title | content | commands | url |
---|---|---|---|
Chapter 4. Build [config.openshift.io/v1] | Chapter 4. Build [config.openshift.io/v1] Description Build configures the behavior of OpenShift builds for the entire cluster. This includes default settings that can be overridden in BuildConfig objects, and overrides which are applied to all builds. The canonical name is "cluster" Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec holds user-settable values for the build controller configuration 4.1.1. .spec Description Spec holds user-settable values for the build controller configuration Type object Property Type Description additionalTrustedCA object AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. buildDefaults object BuildDefaults controls the default information for Builds buildOverrides object BuildOverrides controls override settings for builds 4.1.2. .spec.additionalTrustedCA Description AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.3. .spec.buildDefaults Description BuildDefaults controls the default information for Builds Type object Property Type Description defaultProxy object DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overridden by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. env array Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build env[] object EnvVar represents an environment variable present in a Container. gitProxy object GitProxy contains the proxy settings for git operations only. If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. Users can override a default label by providing a label with the same name in their Build/BuildConfig. 
imageLabels[] object resources object Resources defines resource requirements to execute the build. 4.1.4. .spec.buildDefaults.defaultProxy Description DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overridden by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.5. .spec.buildDefaults.defaultProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.6. .spec.buildDefaults.env Description Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build Type array 4.1.7. .spec.buildDefaults.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 4.1.8. .spec.buildDefaults.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 4.1.9. .spec.buildDefaults.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 4.1.10. .spec.buildDefaults.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 4.1.11. .spec.buildDefaults.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 4.1.12. .spec.buildDefaults.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.13. .spec.buildDefaults.gitProxy Description GitProxy contains the proxy settings for git operations only. 
If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.14. .spec.buildDefaults.gitProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.15. .spec.buildDefaults.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. User can override a default label by providing a label with the same name in their Build/BuildConfig. Type array 4.1.16. .spec.buildDefaults.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.17. .spec.buildDefaults.resources Description Resources defines resource requirements to execute the build. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 4.1.18. .spec.buildDefaults.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 4.1.19. .spec.buildDefaults.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 4.1.20. .spec.buildOverrides Description BuildOverrides controls override settings for builds Type object Property Type Description forcePull boolean ForcePull overrides, if set, the equivalent value in the builds, i.e. false disables force pull for all builds, true enables force pull for all builds, independently of what each build specifies itself imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. imageLabels[] object nodeSelector object (string) NodeSelector is a selector which must be true for the build pod to fit on a node tolerations array Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 4.1.21. .spec.buildOverrides.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. Type array 4.1.22. .spec.buildOverrides.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.23. .spec.buildOverrides.tolerations Description Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. Type array 4.1.24. .spec.buildOverrides.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. 
key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 4.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/builds DELETE : delete collection of Build GET : list objects of kind Build POST : create a Build /apis/config.openshift.io/v1/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/config.openshift.io/v1/builds/{name}/status GET : read status of the specified Build PATCH : partially update status of the specified Build PUT : replace status of the specified Build 4.2.1. /apis/config.openshift.io/v1/builds HTTP method DELETE Description delete collection of Build Table 4.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Build Table 4.2. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.4. Body parameters Parameter Type Description body Build schema Table 4.5. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 4.2.2. /apis/config.openshift.io/v1/builds/{name} Table 4.6. 
Global path parameters Parameter Type Description name string name of the Build HTTP method DELETE Description delete a Build Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. 
Body parameters Parameter Type Description body Build schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 4.2.3. /apis/config.openshift.io/v1/builds/{name}/status Table 4.15. Global path parameters Parameter Type Description name string name of the Build HTTP method GET Description read status of the specified Build Table 4.16. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Build Table 4.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Build Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Build schema Table 4.21. 
HTTP responses HTTP code Response body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/build-config-openshift-io-v1 |
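Putting the schema in the preceding reference together, the following minimal sketch shows how a cluster-wide Build configuration could be applied with oc. The field names come from the specification above; the concrete values (the proxy URL, the BUILD_LOGLEVEL default, the vendor label, the resource requests, and the builds-only taint key) are hypothetical placeholders, not values taken from this reference.

```bash
oc apply -f - <<'EOF'
apiVersion: config.openshift.io/v1
kind: Build
metadata:
  name: cluster                  # the canonical (and only) name for this resource
spec:
  additionalTrustedCA:
    name: user-ca-bundle         # ConfigMap in openshift-config (deprecated in favor of image.config.openshift.io/cluster)
  buildDefaults:
    defaultProxy:
      httpProxy: http://proxy.example.com:3128    # hypothetical proxy endpoint
      httpsProxy: http://proxy.example.com:3128
      noProxy: .cluster.local,.svc
    env:
    - name: BUILD_LOGLEVEL       # hypothetical default env var, applied only if the build does not set it
      value: "2"
    imageLabels:
    - name: vendor               # can be overridden by a label of the same name in a Build/BuildConfig
      value: example-org
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
  buildOverrides:
    forcePull: true              # applied to all builds regardless of what each build specifies
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: builds-only           # hypothetical taint key
      operator: Exists
      effect: NoSchedule
EOF
```

Because buildDefaults only fills in values a build does not set itself, while buildOverrides always wins, a sketch like this is a common way to exercise both halves of the schema at once.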
Installing on OCI | Installing on OCI OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Oracle Cloud Infrastructure Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_oci/index |
Chapter 135. AutoRestart schema reference | Chapter 135. AutoRestart schema reference Used in: KafkaConnectorSpec , KafkaMirrorMaker2ConnectorSpec Full list of AutoRestart schema properties Configures automatic restarts for connectors and tasks that are in a FAILED state. When enabled, a back-off algorithm applies the automatic restart to each failed connector and its tasks. An incremental back-off interval is calculated using the formula n * n + n where n represents the number of restarts. This interval is capped at a maximum of 60 minutes. Consequently, a restart occurs immediately, followed by restarts after 2, 6, 12, 20, 30, 42, 56 minutes, and then at 60-minute intervals. By default, Streams for Apache Kafka initiates restarts of the connector and its tasks indefinitely. However, you can use the maxRestarts property to set a maximum on the number of restarts. If maxRestarts is configured and the connector still fails even after the final restart attempt, you must then restart the connector manually. For Kafka Connect connectors, use the autoRestart property of the KafkaConnector resource to enable automatic restarts of failed connectors and tasks. Enabling automatic restarts of failed connectors for Kafka Connect apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true If you prefer, you can also set a maximum limit on the number of restarts. Enabling automatic restarts of failed connectors for Kafka Connect with limited number of restarts apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true maxRestarts: 10 For MirrorMaker 2, use the autoRestart property of connectors in the KafkaMirrorMaker2 resource to enable automatic restarts of failed connectors and tasks. Enabling automatic restarts of failed connectors for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: mirrors: - sourceConnector: autoRestart: enabled: true # ... heartbeatConnector: autoRestart: enabled: true # ... checkpointConnector: autoRestart: enabled: true # ... 135.1. AutoRestart schema properties Property Property type Description enabled boolean Whether automatic restart for failed connectors and tasks should be enabled or disabled. maxRestarts integer The maximum number of connector restarts that the operator will try. If the connector remains in a failed state after reaching this limit, it must be restarted manually by the user. Defaults to an unlimited number of restarts. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true maxRestarts: 10",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: mirrors: - sourceConnector: autoRestart: enabled: true # heartbeatConnector: autoRestart: enabled: true # checkpointConnector: autoRestart: enabled: true #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-autorestart-reference |
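The back-off arithmetic described in the preceding row is easy to reproduce. The following stand-alone shell sketch is not part of any Streams for Apache Kafka tooling; it only illustrates the formula n * n + n with the 60-minute cap described above.

```bash
#!/bin/bash
# Illustrative only: print the auto-restart back-off schedule in minutes.
# n is the number of restarts already performed; the wait before the next
# restart is n * n + n, capped at 60 minutes.
for n in $(seq 1 10); do
  interval=$(( n * n + n ))
  if (( interval > 60 )); then
    interval=60
  fi
  echo "after ${n} restart(s): next restart in ${interval} minutes"
done
```

For n = 1 through 7 this prints 2, 6, 12, 20, 30, 42, and 56 minutes, matching the sequence quoted above; from n = 8 onward the cap keeps the interval at 60 minutes.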
Chapter 6. Configuring basic system security | Chapter 6. Configuring basic system security Computer security is the protection of computer systems and their hardware, software, information, and services from theft, damage, disruption, and misdirection. Ensuring computer security is an essential task, in particular in enterprises that process sensitive data and handle business transactions. This section covers only the basic security features that you can configure after installation of the operating system. 6.1. Enabling the firewalld service A firewall is a network security system that monitors and controls incoming and outgoing network traffic according to configured security rules. A firewall typically establishes a barrier between a trusted secure internal network and another outside network. The firewalld service, which provides a firewall in Red Hat Enterprise Linux, is automatically enabled during installation. To enable the firewalld service, follow this procedure. Procedure Display the current status of firewalld : If firewalld is not enabled and running, switch to the root user, and start the firewalld service and enable it to start automatically after the system restarts: Verification Check that firewalld is running and enabled: Additional resources Using and configuring firewalld man firewalld(1) 6.2. Managing firewall in the RHEL 8 web console To configure the firewalld service in the web console, navigate to Networking Firewall . By default, the firewalld service is enabled. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . To enable or disable firewalld in the web console, switch the Firewall toggle button. Note Additionally, you can define more fine-grained access through the firewall to a service using the Add services button. 6.3. Managing basic SELinux settings Security-Enhanced Linux (SELinux) is an additional layer of system security that determines which processes can access which files, directories, and ports. These permissions are defined in SELinux policies. A policy is a set of rules that guide the SELinux security engine. SELinux has two possible states: Disabled Enabled When SELinux is enabled, it runs in one of the following modes: Enforcing Permissive In enforcing mode , SELinux enforces the loaded policies. SELinux denies access based on SELinux policy rules and enables only the interactions that are explicitly allowed. Enforcing mode is the safest SELinux mode and is the default mode after installation. In permissive mode , SELinux does not enforce the loaded policies. SELinux does not deny access, but reports actions that break the rules to the /var/log/audit/audit.log log. Permissive mode is the default mode during installation. Permissive mode is also useful in some specific cases, for example when troubleshooting problems. Additional resources Using SELinux 6.4. Switching SELinux modes in the RHEL 8 web console You can set SELinux mode through the RHEL 8 web console in the SELinux menu item. By default, SELinux enforcing policy in the web console is on, and SELinux operates in enforcing mode. By turning it off, you switch SELinux to permissive mode. Note that this selection is automatically reverted on the next boot to the configuration defined in the /etc/sysconfig/selinux file. 
Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . In the web console, use the Enforce policy toggle button in the SELinux menu item to turn SELinux enforcing policy on or off. 6.5. Additional resources Generating SSH key pairs Setting an OpenSSH server for key-based authentication Security hardening Using SELinux Securing networks Deploying the same SELinux configuration on multiple systems | [
"systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) Active: inactive (dead)",
"systemctl enable --now firewalld",
"systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled) Active: active (running)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/assembly_configuring-system-security_configuring-basic-system-settings |
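As a supplement to the firewalld and SELinux procedures in the preceding row, the following sketch shows one way to verify and adjust both from a shell. getenforce, setenforce, and firewall-cmd are standard RHEL utilities; note that a runtime mode change made with setenforce is reverted at boot to whatever /etc/sysconfig/selinux defines, as the chapter explains.

```bash
# Check whether firewalld is active (equivalent to the systemctl status check above)
firewall-cmd --state

# Show the current SELinux mode: Enforcing, Permissive, or Disabled
getenforce

# Temporarily switch to permissive mode, for example while troubleshooting (run as root)
setenforce 0

# Switch back to enforcing mode (run as root)
setenforce 1

# The mode applied at the next boot comes from this configuration file
grep ^SELINUX= /etc/sysconfig/selinux
```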
Chapter 17. Configuring a Linux instance on 64-bit IBM Z | Chapter 17. Configuring a Linux instance on 64-bit IBM Z This section describes most of the common tasks for installing Red Hat Enterprise Linux on 64-bit IBM Z. 17.1. Adding DASDs to a z/VM system Direct Access Storage Devices (DASDs) are a type of storage commonly used with 64-bit IBM Z. For more information, see Working with DASDs in the IBM Knowledge Center. The following example shows how to set a DASD online, format it, and make the change persistent. Verify that the device is attached or linked to the Linux system if running under z/VM. To link a mini disk to which you have access, run the following commands: 17.2. Dynamically setting DASDs online This section contains information about setting a DASD online. Procedure Use the cio_ignore utility to remove the DASD from the list of ignored devices and make it visible to Linux: Replace device_number with the device number of the DASD. For example: Set the device online. Use a command of the following form: Replace device_number with the device number of the DASD. For example: For instructions on how to set a DASD online persistently, see Persistently setting DASDs online . 17.3. Preparing a new DASD with low-level formatting Once the disk is online, change back to the /root directory and low-level format the device. This is only required once for a DASD during its entire lifetime: When the progress bar reaches the end and the format is complete, dasdfmt prints the following output: Now, use fdasd to partition the DASD. You can create up to three partitions on a DASD. In our example here, we create one partition spanning the whole disk: After a (low-level formatted) DASD is online, it can be used like any other disk under Linux. For example, you can create file systems, LVM physical volumes, or swap space on its partitions, for example /dev/disk/by-path/ccw-0.0.4b2e-part1 . Never use the full DASD device ( /dev/dasdb ) for anything but the commands dasdfmt and fdasd . If you want to use the entire DASD, create one partition spanning the entire drive as in the fdasd example above. To add additional disks later without breaking existing disk entries in, for example, /etc/fstab , use the persistent device symbolic links under /dev/disk/by-path/ . 17.4. Persistently setting DASDs online The above instructions described how to activate DASDs dynamically in a running system. However, such changes are not persistent and do not survive a reboot. Making changes to the DASD configuration persistent in your Linux system depends on whether the DASDs belong to the root file system. Those DASDs required for the root file system need to be activated very early during the boot process by the initramfs to be able to mount the root file system. The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. 17.5. DASDs that are part of the root file system The file you have to modify to add DASDs that are part of the root file system has changed in Red Hat Enterprise Linux 8. Instead of editing the /etc/zipl.conf file, the new file to be edited, and its location, may be found by running the following commands: There is one boot option to activate DASDs early in the boot process: rd.dasd= . This option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma-separated list of bus IDs. 
To specify a range of DASDs, specify the first and the last bus ID. Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf file for a system that uses physical volumes on partitions of two DASDs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system. To add another physical volume on a partition of a third DASD with device bus ID 0.0.202b . To do this, add rd.dasd=0.0.202b to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf : Warning Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails. Run zipl to apply the changes of the configuration file for the IPL: 17.6. DASDs that are not part of the root file system Direct Access Storage Devices (DASDs) that are not part of the root file system, that is, data disks , are persistently configured in the /etc/dasd.conf file. This file contains one DASD per line, where each line begins with the DASD's bus ID. When adding a DASD to the /etc/dasd.conf file, use key-value pairs to specify the options for each entry. Separate the key and its value with an equal (=) sign. When adding multiple options, use a space or a tab to separate each option. Example /etc/dasd.conf file Changes to the /etc/dasd.conf file take effect after a system reboot or after a new DASD is dynamically added by changing the system's I/O configuration (that is, the DASD is attached under z/VM). Alternatively, to activate a DASD that you have added to the /etc/dasd.conf file, complete the following steps: Remove the DASD from the list of ignored devices and make it visible using the cio_ignore utility: where device_number is the DASD device number. For example, if the device number is 021a , run: Activate the DASD by writing to the device's uevent attribute: where dasd-bus-ID is the DASD's bus ID. For example, if the bus ID is 0.0.021a , run: 17.7. FCP LUNs that are part of the root file system The only file you have to modify for adding FCP LUNs that are part of the root file system has changed in Red Hat Enterprise Linux 8. Instead of editing the /etc/zipl.conf file, the new file to be edited, and its location, may be found by running the following commands: Red Hat Enterprise Linux provides a parameter to activate FCP LUNs early in the boot process: rd.zfcp= . The value is a comma-separated list containing the FCP device bus ID, the target WWPN as 16 digit hexadecimal number prefixed with 0x , and the FCP LUN prefixed with 0x and padded with zeroes to the right to have 16 hexadecimal digits. The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-8.6 or older releases. Otherwise they can be omitted, for example, rd.zfcp=0.0.4000 . Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf file for a system that uses physical volumes on partitions of two FCP LUNs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system. For simplicity, the example shows a configuration without multipathing. 
To add another physical volume on a partition of a third FCP LUN with device bus ID 0.0.fc00, WWPN 0x5105074308c212e9 and FCP LUN 0x401040a300000000, add rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf . For example: Warning Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails. Run dracut -f to update the initial RAM disk of your target kernel. Run zipl to apply the changes of the configuration file for the IPL: 17.8. FCP LUNs that are not part of the root file system FCP LUNs that are not part of the root file system, such as data disks, are persistently configured in the file /etc/zfcp.conf . It contains one FCP LUN per line. Each line contains the device bus ID of the FCP adapter, the target WWPN as 16 digit hexadecimal number prefixed with 0x , and the FCP LUN prefixed with 0x and padded with zeroes to the right to have 16 hexadecimal digits, separated by a space or tab. The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-8.6 or older releases. Otherwise they can be omitted and only the device bus ID is mandatory. Entries in /etc/zfcp.conf are activated and configured by udev when an FCP adapter is added to the system. At boot time, all FCP adapters visible to the system are added and trigger udev . Example content of /etc/zfcp.conf : Modifications of /etc/zfcp.conf only become effective after a reboot of the system or after the dynamic addition of a new FCP channel by changing the system's I/O configuration (for example, a channel is attached under z/VM). Alternatively, you can trigger the activation of a new entry in /etc/zfcp.conf for an FCP adapter which was previously not active, by executing the following commands: Use the cio_ignore utility to remove the FCP adapter from the list of ignored devices and make it visible to Linux: Replace device_number with the device number of the FCP adapter. For example: To trigger the uevent that activates the change, issue: For example: 17.9. Adding a qeth device The qeth network device driver supports 64-bit IBM Z OSA-Express features in QDIO mode, HiperSockets, z/VM guest LAN, and z/VM VSWITCH. For more information about the qeth device driver naming scheme, see Customizing boot parameters . 17.10. Dynamically adding a qeth device This section contains information about how to add a qeth device dynamically. Procedure Determine whether the qeth device driver modules are loaded. The following example shows loaded qeth modules: If the output of the lsmod command shows that the qeth modules are not loaded, run the modprobe command to load them: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. 
For example, if the read_device_bus_id is 0.0.f500 , the write_device_bus_id is 0.0.f501 , and the data_device_bus_id is 0.0.f502 : Use the znetconf utility to sense and list candidate configurations for network devices: Select the configuration you want to work with and use znetconf to apply the configuration and to bring the configured group device online as network device. Optional: You can also pass arguments that are configured on the group device before it is set online: Now you can continue to configure the encf500 network interface. Alternatively, you can use sysfs attributes to set the device online as follows: Create a qeth group device: For example: , verify that the qeth group device was created properly by looking for the read channel: You can optionally set additional parameters and features, depending on the way you are setting up your system and the features you require, such as: portno layer2 portname Bring the device online by writing 1 to the online sysfs attribute: Then verify the state of the device: A return value of 1 indicates that the device is online, while a return value 0 indicates that the device is offline. Find the interface name that was assigned to the device: Now you can continue to configure the encf500 network interface. The following command from the s390utils package shows the most important settings of your qeth device: 17.11. Persistently adding a qeth device To make your new qeth device persistent, you need to create the configuration file for your new interface. The network interface configuration files are placed in the /etc/sysconfig/network-scripts/ directory. The network configuration files use the naming convention ifcfg- device , where device is the value found in the if_name file in the qeth group device that was created earlier, for example enc9a0 . The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. If a configuration file for another device of the same type already exists, the simplest way to add the config file is to copy it to the new name and then edit it: To learn IDs of your network devices, use the lsqeth utility: If you do not have a similar device defined, you must create a new file. Use this example of /etc/sysconfig/network-scripts/ifcfg-0.0.09a0 as a template: Edit the new ifcfg-0.0.0600 file as follows: Modify the DEVICE statement to reflect the contents of the if_name file from your ccw group. Modify the IPADDR statement to reflect the IP address of your new interface. Modify the NETMASK statement as needed. If the new interface is to be activated at boot time, then make sure ONBOOT is set to yes . Make sure the SUBCHANNELS statement matches the hardware addresses for your qeth device. Modify the PORTNAME statement or leave it out if it is not necessary in your environment. You can add any valid sysfs attribute and its value to the OPTIONS parameter. The Red Hat Enterprise Linux installation program currently uses this to configure the layer mode ( layer2 ) and the relative port number ( portno ) of qeth devices. The qeth device driver default for OSA devices is now layer 2 mode. To continue using old ifcfg definitions that rely on the default of layer 3 mode, add layer2=0 to the OPTIONS parameter. 
/etc/sysconfig/network-scripts/ifcfg-0.0.0600 Changes to an ifcfg file only become effective after rebooting the system or after the dynamic addition of new network device channels by changing the system's I/O configuration (for example, attaching under z/VM). Alternatively, you can trigger the activation of an ifcfg file for network channels which were not yet active, by executing the following commands: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.0600 , the write_device_bus_id is 0.0.0601 , and the data_device_bus_id is 0.0.0602 : To trigger the uevent that activates the change, issue: For example: Check the status of the network device: Now start the new interface: Check the status of the interface: Check the routing for the new interface: Verify your changes by using the ping utility to ping the gateway or another host on the subnet of the new device: If the default route information has changed, you must also update /etc/sysconfig/network accordingly. Additional resources nm-settings-keyfile man page on your system 17.12. Configuring a 64-bit IBM Z network device for network root file system To add a network device that is required to access the root file system, you only have to change the boot options. The boot options can be in a parameter file, however, the /etc/zipl.conf file no longer contains specifications of the boot records. The file that needs to be modified can be located using the following commands: Dracut , the mkinitrd successor that provides the functionality in the initramfs that in turn replaces initrd , provides a boot parameter to activate network devices on 64-bit IBM Z early in the boot process: rd.znet= . As input, this parameter takes a comma-separated list of the NETTYPE (qeth, lcs, ctc), two (lcs, ctc) or three (qeth) device bus IDs, and optional additional parameters consisting of key-value pairs corresponding to network device sysfs attributes. This parameter configures and activates the 64-bit IBM Z network hardware. The configuration of IP addresses and other network specifics works the same as for other platforms. See the dracut documentation for more details. The cio_ignore commands for the network channels are handled transparently on boot. Example boot options for a root file system accessed over the network through NFS: 17.13. Additional resources Device Drivers, Features, and Commands on RHEL . | [
"CP ATTACH EB1C TO *",
"CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W",
"cio_ignore -r device_number",
"cio_ignore -r 4b2e",
"chccwdev -e device_number",
"chccwdev -e 4b2e",
"cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%",
"Rereading the partition table Exiting",
"fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). Syncing disks Done.",
"0.0.0207 0.0.0200 use_diag=1 readonly=1",
"cio_ignore -r device_number",
"cio_ignore -r 021a",
"echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent",
"echo add > /sys/bus/ccw/devices/0.0.021a/uevent",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf' Target device information Device..........................: 08:00 Partition.......................: 08:01 Device name.....................: sda Device driver name..............: sd Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section '4.18.0-32.el8.s390x' (default) kernel image......: /boot/vmlinuz-4.18.0-32.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-4.18.0-32.el8.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: sda. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. Syncing disks Done.",
"0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000",
"cio_ignore -r device_number",
"cio_ignore -r fcfc",
"echo add > /sys/bus/ccw/devices/device-bus-ID/uevent",
"echo add > /sys/bus/ccw/devices/0.0.fcfc/uevent",
"lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth",
"modprobe qeth",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.f500,0.0.f501,0.0.f502",
"znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth",
"znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group",
"echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group",
"ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500",
"echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500",
"lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none",
"cd /etc/sysconfig/network-scripts # cp ifcfg-enc9a0 ifcfg-enc600",
"lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64",
"IBM QETH DEVICE=enc9a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.09a0,0.0.09a1,0.0.09a2 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:23:65:1a TYPE=Ethernet",
"IBM QETH DEVICE=enc600 BOOTPROTO=static IPADDR=192.168.70.87 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:b3:84:ef TYPE=Ethernet",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.0600,0.0.0601,0.0.0602",
"echo add > /sys/bus/ccw/devices/read-channel/uevent",
"echo add > /sys/bus/ccw/devices/0.0.0600/uevent",
"lsqeth",
"ifup enc600",
"ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff inet 10.85.1.245/24 brd 10.34.3.255 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever",
"ip route default via 10.85.1.245 dev enc600 proto static metric 1024 12.34.4.95/24 dev enp0s25 proto kernel scope link src 12.34.4.201 12.38.4.128 via 12.38.19.254 dev enp0s25 proto dhcp metric 1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1",
"ping -c 1 192.168.70.8 PING 192.168.70.8 (192.168.70.8) 56(84) bytes of data. 64 bytes from 192.168.70.8: icmp_seq=0 ttl=63 time=8.07 ms",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/configuring-a-linux-instance-on-ibm-z_rhel-installer |
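After a reboot, a quick way to confirm that the persistently configured DASD, zFCP, and qeth devices came back online is to list them with the s390 tools. This is a minimal sketch that assumes the s390utils package is installed; the device and interface names are the examples used in the commands above.

lsdasd                 # DASDs 0.0.0200, 0.0.0207 and 0.0.202b should show as active
lszfcp -D              # lists the zFCP LUNs attached through 0.0.fc00 and 0.0.fcd0
lsqeth encf500         # the qeth device state should be UP (LAN ONLINE)
ip addr show enc600    # the interface should hold its configured address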
7.320. haproxy | 7.320. haproxy 7.320.1. RHSA-2013:1120 - Moderate: haproxy security update An updated haproxy package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. HAProxy provides high availability, load balancing, and proxying for TCP and HTTP-based applications. Security Fix CVE-2013-2175 A flaw was found in the way HAProxy handled requests when the proxy's configuration ("/etc/haproxy/haproxy.cfg") had certain rules that use the hdr_ip criterion. A remote attacker could use this flaw to crash HAProxy instances that use the affected configuration. Red Hat would like to thank HAProxy upstream for reporting this issue. Upstream acknowledges David Torgerson as the original reporter. HAProxy is released as a Technology Preview in Red Hat Enterprise Linux 6. More information about Red Hat Technology Previews is available at https://access.redhat.com/support/offerings/techpreview/ . All users of haproxy are advised to upgrade to this updated package, which contains a backported patch to correct this issue. 7.320.2. RHSA-2013:0868 - Moderate: haproxy security update An updated haproxy package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. HAProxy provides high availability, load balancing, and proxying for TCP and HTTP-based applications. Security Fix CVE-2013-1912 A buffer overflow flaw was found in the way HAProxy handled pipelined HTTP requests. A remote attacker could send pipelined HTTP requests that would cause HAProxy to crash or, potentially, execute arbitrary code with the privileges of the user running HAProxy. This issue only affected systems using all of the following combined configuration options: HTTP keep alive enabled, HTTP keywords in TCP inspection rules, and request appending rules. Red Hat would like to thank Willy Tarreau of HAProxy upstream for reporting this issue. Upstream acknowledges Yves Lafon from the W3C as the original reporter. HAProxy is released as a Technology Preview in Red Hat Enterprise Linux 6. More information about Red Hat Technology Previews is available at https://access.redhat.com/support/offerings/techpreview/ All users of haproxy are advised to upgrade to this updated package, which contains a backported patch to correct this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/haproxy |
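Neither advisory shows the configuration pattern involved; purely as an illustration, an ACL that uses the hdr_ip criterion in /etc/haproxy/haproxy.cfg looks roughly like the following. The frontend name, backend names, header, and network are hypothetical and are not taken from the advisories.

frontend www
    bind *:80
    # hdr_ip extracts an IP address from the named request header and matches it against the value
    acl from_internal hdr_ip(X-Forwarded-For) 10.0.0.0/8
    use_backend internal_servers if from_internal
    default_backend public_servers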
Chapter 2. Overview of the Cluster Samples Operator | Chapter 2. Overview of the Cluster Samples Operator The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the Red Hat OpenShift Service on AWS image streams and Red Hat OpenShift Service on AWS templates. The Cluster Samples Operator is being deprecated Starting from Red Hat OpenShift Service on AWS 4.16, the Cluster Samples Operator is deprecated. No new templates, samples, or non-Source-to-Image (Non-S2I) image streams will be added to the Cluster Samples Operator. However, the existing S2I builder image streams and templates will continue to receive updates until the Cluster Samples Operator is removed in a future release. S2I image streams and templates include: Ruby Python Node.js Perl PHP HTTPD Nginx EAP Java Webserver .NET Go The Cluster Samples Operator will stop managing and providing support to the non-S2I samples (image streams and templates). You can contact the image stream or template owner for any requirements and future plans. In addition, refer to the list of the repositories hosting the image stream or templates . 2.1. Understanding the Cluster Samples Operator During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates. Note To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed for image import. The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace. The image for the Cluster Samples Operator contains image stream and template definitions for the associated Red Hat OpenShift Service on AWS release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of Red Hat OpenShift Service on AWS. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically. Note The Jenkins images are part of the image payload from installation and are tagged into the image streams directly. The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion: Operator managed image streams. Operator managed templates. Operator generated configuration resources. Cluster status resources. Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. 2.1.1. Cluster Samples Operator's use of management state The Cluster Samples Operator is bootstrapped as Managed by default or if global proxy is configured. In the Managed state, the Cluster Samples Operator is actively managing its resources and keeping the component active in order to pull sample image streams and images from the registry and ensure that the requisite sample templates are installed. Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed including: If the Cluster Samples Operator cannot reach registry.redhat.io after three minutes on initial startup after a clean installation. 
If the Cluster Samples Operator detects it is on an IPv6 network. Note For Red Hat OpenShift Service on AWS, the default image registry is registry.access.redhat.com or quay.io . However, if the Cluster Samples Operator detects that it is on an IPv6 network and an Red Hat OpenShift Service on AWS global proxy is configured, then IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed . Important IPv6 installations are not currently supported by registry.redhat.io . The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io . 2.1.2. Cluster Samples Operator's tracking and error recovery of image stream imports After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag's image import. If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, approximately every 15 minutes until it sees the import succeed, or if the Cluster Samples Operator's configuration is changed such that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed . Additional resources If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples. 2.2. Removing deprecated image stream tags from the Cluster Samples Operator The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags. You can remove deprecated image stream tags by editing the image stream with the oc tag command. Note Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations. Prerequisites You installed the oc CLI. Procedure Remove deprecated image stream tags by editing the image stream with the oc tag command. USD oc tag -d <image_stream_name:tag> Example output Deleted tag default/<image_stream_name:tag>. Additional resources For more information about configuring credentials, see Using image pull secrets . | [
"oc tag -d <image_stream_name:tag>",
"Deleted tag default/<image_stream_name:tag>."
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/images/configuring-samples-operator |
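The configuration object mentioned above is a cluster-scoped resource named cluster that can be edited with oc edit configs.samples.operator.openshift.io/cluster. The following sketch is illustrative only; the image stream names listed under skippedImagestreams are placeholders.

apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed   # set to Removed to have the Operator remove its samples
  skippedImagestreams:       # samples the Operator should not create or update
  - httpd
  - nginx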
Chapter 6. Adding Storage for Red Hat Virtualization | Chapter 6. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 6.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 6.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally for the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Important If you use the REST API method discoveriscsi to discover the iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the REST API method iscsilogin . 
See discoveriscsi in the REST API Guide for more information. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Important When using the REST API iscsilogin method to log in, you must use the iscsi details from the discovered targets results in the discoveriscsi method. See iscsilogin in the REST API Guide for more information. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 6.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. 
All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 6.4. Adding POSIX-compliant File System Storage This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name for the storage domain. Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS) . Alternatively, select (none) . Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu. Select a host from the Host drop-down list. Enter the Path to the POSIX file system, as you would normally provide it to the mount command. Enter the VFS Type , as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types. Enter additional Mount Options , as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . 6.5. Adding a local storage domain When adding a local storage domain to a host, setting the path to the local storage directory automatically creates and places the host in a local data center, local cluster, and local storage domain. 
Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . The host's status changes to Maintenance . Click Management Configure Local Storage . Click the Edit buttons to the Data Center , Cluster , and Storage fields to configure and name the local storage domain. Set the path to your local storage in the text entry field. If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster. Click OK . The Manager sets up the local data center with a local cluster, local storage domain. It also changes the host's status to Up . Verification Click Storage Domains . Locate the local storage domain you just added. The domain's status should be Active ( ), and the value in the Storage Type column should be Local on Host . You can now upload a disk image in the new local storage domain. 6.6. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/adding_storage_domains_to_rhv_sm_remotedb_deploy |
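The steps above use the Administration Portal; the discoveriscsi and iscsilogin methods referenced in the text can also be called directly against the REST API. The following curl call is only a sketch: the Manager address, credentials, host ID, and portal address are placeholders, and the exact request body should be checked against the REST API Guide.

curl -k -u admin@internal:password \
     -H "Content-Type: application/xml" \
     -X POST "https://manager.example.com/ovirt-engine/api/hosts/<host_id>/discoveriscsi" \
     -d '<action><iscsi><address>iscsi.example.com</address><port>3260</port></iscsi></action>'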
1.4. Installing Supporting Components on Client Machines | 1.4. Installing Supporting Components on Client Machines 1.4.1. Installing Console Components A console is a graphical window that allows you to view the start up screen, shut down screen, and desktop of a virtual machine, and to interact with that virtual machine in a similar way to a physical machine. In Red Hat Virtualization, the default application for opening a console to a virtual machine is Remote Viewer, which must be installed on the client machine prior to use. 1.4.1.1. Installing Remote Viewer on Red Hat Enterprise Linux The Remote Viewer application provides users with a graphical console for connecting to virtual machines. Once installed, it is called automatically when attempting to open a SPICE session with a virtual machine. Alternatively, it can also be used as a standalone application. Remote Viewer is included in the virt-viewer package provided by the base Red Hat Enterprise Linux Workstation and Red Hat Enterprise Linux Server repositories. Procedure Install the virt-viewer package: # dnf install virt-viewer Restart your browser for the changes to take effect. You can now connect to your virtual machines using either the SPICE protocol or the VNC protocol. 1.4.1.2. Installing Remote Viewer on Windows The Remote Viewer application provides users with a graphical console for connecting to virtual machines. Once installed, it is called automatically when attempting to open a SPICE session with a virtual machine. Alternatively, it can also be used as a standalone application. Installing Remote Viewer on Windows Open a web browser and download one of the following installers according to the architecture of your system. Virt Viewer for 32-bit Windows: https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x86.msi Virt Viewer for 64-bit Windows: https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x64.msi Open the folder where the file was saved. Double-click the file. Click Run if prompted by a security warning. Click Yes if prompted by User Account Control. Remote Viewer is installed and can be accessed via Remote Viewer in the VirtViewer folder of All Programs in the start menu. 1.4.1.3. Installing usbdk on Windows usbdk is a driver that enables remote-viewer exclusive access to USB devices on Windows operating systems. Installing usbdk requires Administrator privileges. Note that the previously supported USB Clerk option has been deprecated and is no longer supported. Installing usbdk on Windows Open a web browser and download one of the following installers according to the architecture of your system. usbdk for 32-bit Windows: https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x86.msi usbdk for 64-bit Windows: https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x64.msi Open the folder where the file was saved. Double-click the file. Click Run if prompted by a security warning. Click Yes if prompted by User Account Control. | [
"dnf install virt-viewer",
"https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x86.msi",
"https:// your-manager-fqdn /ovirt-engine/services/files/spice/virt-viewer-x64.msi",
"https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x86.msi",
"https:// [your manager's address] /ovirt-engine/services/files/spice/usbdk-x64.msi"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Installing_Supporting_Components |
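As noted above, Remote Viewer can also be started as a standalone application. A minimal, illustrative invocation on Red Hat Enterprise Linux (the host name and port are placeholders):

remote-viewer spice://vm-host.example.com:5900    # connect directly to a SPICE display
remote-viewer vnc://vm-host.example.com:5900      # or to a VNC display
remote-viewer console.vv                          # or open a console file saved from the portal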
Chapter 9. Secure Linux Containers | Chapter 9. Secure Linux Containers Linux Containers ( LXC ) is a low-level virtualization feature that allows you to run multiple copies of the same service at the same time on a system. Compared to full virtualization, containers do not require an entire new system to boot, can use less memory, and can use the base operating system in a read-only manner. For example, LXC allow you to run multiple web servers simultaneously, each with their own data while sharing the system data, and even running as the root user. However, running a privileged process within a container could affect other processes running outside of the container or processes running in other containers. Secure Linux containers use the SELinux context, therefore preventing the processes running within them from interacting with each other or with the host. The Docker application is the main utility for managing Linux Containers in Red Hat Enterprise Linux. As an alternative, you can also use the virsh command-line utility provided by the libvirt package. For further details about Linux Containers, see Getting Started with Containers . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-containers |
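One way to see this separation in practice is to compare the SELinux context of a containerized process with a process on the host. This is a rough sketch that assumes SELinux is enforcing and Docker is installed; the container image and the exact SELinux type (for example svirt_lxc_net_t) may differ on your system.

docker run -d --name web1 httpd
ps -eZ | grep httpd
# example context: system_u:system_r:svirt_lxc_net_t:s0:c123,c456
# the per-container MCS category pair (c123,c456) is what keeps containers from interacting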
Chapter 2. Deployment Scenarios | Chapter 2. Deployment Scenarios There are three deployment scenarios for Red Hat Satellite in Amazon Web Services: One region setup Connecting on-premise and AWS region Connecting different regions Figure 2.1. Scenario 1: One region setup The least complex configuration of Satellite Server in Amazon Web Services consists of both Satellite Server and the content hosts residing within the same region and within the Virtual Private Cloud (VPC). You can also use a different availability zone. Scenario 2: Connecting on-premise and AWS region Create a VPN connection between the on-premise location and the AWS region where the Capsule resides. It is also possible to use the external host name of Satellite Server when you register the instance which runs Capsule Server. Option 1: Site-to-Site VPN connection between the AWS region and the On-Premise Datacenter Option 2: Direct connection using the External DNS host name Scenario 3: Connecting different regions Create a site-to-site VPN connection between the different regions so that you can use the Internal DNS host name when you register the instance that runs Capsule Server to Satellite Server. If you do not establish a site-to-site VPN connection, use the external DNS host name when you register the instance that runs Capsule Server to Satellite Server. Note Most Public Cloud Providers do not charge for data being transferred into a region, or between availability zones within a single region; however, they do charge for data leaving the region to the Internet. Option 1: Site-to-Site VPN connection between AWS regions Option 2: Direct connection using the External DNS host name | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/deployment_scenarios |
Chapter 7. Geo-replication | Chapter 7. Geo-replication Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication is supported on standalone and Operator deployments. Additional resources For more information about the geo-replication feature's architecture, see the architecture guide , which includes technical diagrams and a high-level overview. 7.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine, to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 7.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other region's object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend checkpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance end point only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that sites' object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not failover to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. 
Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-Replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 7.2.1. Setting up geo-replication on OpenShift Container Platform Use the following procedure to set up geo-replication on OpenShift Container Platform. Procedure Deploy a postgres instance for Red Hat Quay. Login to the database by entering the following command: psql -U <username> -h <hostname> -p <port> -d <database_name> Create a database for Red Hat Quay named quay . For example: CREATE DATABASE quay; Enable pg_trm extension inside the database \c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm; Deploy a Redis instance: Note Deploying a Redis instance might be unnecessary if your cloud provider has its own service. Deploying a Redis instance is required if you are leveraging Builders. Deploy a VM for Redis Verify that it is accessible from the clusters where Red Hat Quay is running Port 6379/TCP must be open Run Redis inside the instance sudo dnf install -y podman podman run -d --name redis -p 6379:6379 redis Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster. Configure a load balancer to provide a single entry point to the clusters. 7.2.1.1. Configuring geo-replication for the Red Hat Quay Operator on OpenShift Container Platform Use the following procedure to configure geo-replication for the Red Hat Quay Operator. Procedure Create a config.yaml file that is shared between clusters. 
This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends: Geo-replication config.yaml file SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true 1 A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer. 2 To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config . Create the configBundleSecret by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environmental variable override to configure the appropriate storage for that cluster. For example: Note The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other. US cluster QuayRegistry example apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates with either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . European cluster apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage Note Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates with either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes . 7.2.2. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. 
This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. 7.3. Upgrading a geo-replication deployment of the Red Hat Quay Operator Use the following procedure to upgrade your geo-replicated Red Hat Quay Operator. Important When upgrading geo-replicated Red Hat Quay Operator deployments to the y-stream release (for example, Red Hat Quay 3.7 Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime down upgrading from one y-stream release to the . It is highly recommended to back up your Red Hat Quay Operator deployment before upgrading. Procedure This procedure assumes that you are running the Red Hat Quay Operator on three (or more) systems. For this procedure, we will assume three systems named System A, System B, and System C . System A will serve as the primary system in which the Red Hat Quay Operator is deployed. On System B and System C, scale down your Red Hat Quay Operator deployment. This is done by disabling auto scaling and overriding the replica county for Red Hat Quay, mirror workers, and Clair (if it is managed). Use the following quayregistry.yaml file as a reference: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ... 1 Disable auto scaling of Quay, Clair and Mirroring workers 2 Set the replica count to 0 for components accessing the database and objectstorage Note You must keep the Red Hat Quay Operator running on System A. Do not update the quayregistry.yaml file on System A. Wait for the registry-quay-app , registry-quay-mirror , and registry-clair-app pods to disappear. Enter the following command to check their status: oc get pods -n <quay-namespace> Example output quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-config-editor-6dfdcfc44f-hlvwm 1/1 Running 0 73s quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m On System A, initiate a Red Hat Quay Operator upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators . For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator . After the new Red Hat Quay Operator is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started. 
Confirm that the update has properly worked by navigating to the Red Hat Quay UI: In the OpenShift console, navigate to Operators Installed Operators , and click the Registry Endpoint link. Important Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay Operator on System B and on System C until the UI is available on System A. After confirming that the update has properly worked on System A, initiate the Red Hat Quay Operator on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted. Note Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start. 7.3.1. Removing a geo-replicated site from your Red Hat Quay Operator deployment By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup. Prerequisites You are logged into OpenShift Container Platform. You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage . Each site has its own Organization, Repository, and image tags. Procedure Sync the blobs between all of your defined sites by running the following command: USD python -m util.backfillreplication Warning Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. When running this command, replication jobs are created which are picked up by the replication worker. If there are blobs that need replicated, the script returns UUIDs of blobs that will be replicated. If you run this command multiple times, and the output from the return script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Customers should use appropriate judgement before proceeding, as the allotted time replication takes depends on the number of blobs detected. Alternatively, you could use a third party cloud tool, such as Microsoft Azure, to check the synchronization status. This step must be completed before proceeding. In your Red Hat Quay config.yaml file for site usstorage , remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site. Enter the following command to identify your Quay application pods: USD oc get pod -n <quay_namespace> Example output quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm Enter the following command to open an interactive shell session in the usstorage pod: USD oc rsh quay390usstorage-quay-app-5779ddc886-2drh2 Enter the following command to permanently remove the eustorage site: Important The following action cannot be undone. Use with caution. sh-4.4USD python -m util.removelocation eustorage Example output WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage | [
"psql -U <username> -h <hostname> -p <port> -d <database_name>",
"CREATE DATABASE quay;",
"\\c quay; CREATE EXTENSION IF NOT EXISTS pg_trgm;",
"sudo dnf install -y podman run -d --name redis -p 6379:6379 redis",
"SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1 DB_CONNECTION_ARGS: autorollback: true threadlocals: true DB_URI: postgresql://postgres:[email protected]:5432/quay 2 BUILDLOGS_REDIS: host: 10.19.0.2 port: 6379 USER_EVENTS_REDIS: host: 10.19.0.2 port: 6379 DISTRIBUTED_STORAGE_CONFIG: usstorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQABCDEFG bucket_name: georep-test-bucket-0 secret_key: AYWfEaxX/u84XRA2vUX5C987654321 storage_path: /quaygcp eustorage: - GoogleCloudStorage - access_key: GOOGQGPGVMASAAMQWERTYUIOP bucket_name: georep-test-bucket-1 secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678 storage_path: /quaygcp DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: - usstorage - eustorage DISTRIBUTED_STORAGE_PREFERENCE: - usstorage - eustorage FEATURE_STORAGE_REPLICATION: true",
"oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: usstorage",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: georep-config-bundle components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: postgres managed: false - kind: clairpostgres managed: false - kind: redis managed: false - kind: quay managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage - kind: mirror managed: true overrides: env: - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE value: eustorage",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: registry namespace: ns spec: components: ... - kind: horizontalpodautoscaler managed: false 1 - kind: quay managed: true overrides: 2 replicas: 0 - kind: clair managed: true overrides: replicas: 0 - kind: mirror managed: true overrides: replicas: 0 ...",
"get pods -n <quay-namespace>",
"quay-operator.v3.7.1-6f9d859bd-p5ftc 1/1 Running 0 12m quayregistry-clair-postgres-7487f5bd86-xnxpr 1/1 Running 1 (12m ago) 12m quayregistry-quay-app-upgrade-xq2v6 0/1 Completed 0 12m quayregistry-quay-config-editor-6dfdcfc44f-hlvwm 1/1 Running 0 73s quayregistry-quay-redis-84f888776f-hhgms 1/1 Running 0 12m",
"python -m util.backfillreplication",
"oc get pod -n <quay_namespace>",
"quay390usstorage-quay-app-5779ddc886-2drh2 quay390eustorage-quay-app-66969cd859-n2ssm",
"oc rsh quay390usstorage-quay-app-5779ddc886-2drh2",
"sh-4.4USD python -m util.removelocation eustorage",
"WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y Deleted placement 30 Deleted placement 31 Deleted placement 32 Deleted placement 33 Deleted location eustorage"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/georepl-intro |
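The global load balancer called for above has to probe each site itself, using the documented /health/endtoend endpoint. A sketch of such a probe (the host names are placeholders):

curl -sk https://us.quay.example.com/health/endtoend
curl -sk https://eu.quay.example.com/health/endtoend
# a healthy site answers with HTTP 200 and a JSON status body; route clients away from any site that does not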
F.2. Configuring HA-LVM Failover with Tagging | F.2. Configuring HA-LVM Failover with Tagging To set up HA-LVM failover by using tags in the /etc/lvm/lvm.conf file, perform the following steps: In the global section of the /etc/lvm/lvm.conf file, ensure that the locking_type parameter is set to the value '1' and the use_lvmetad parameter is set to the value '0'. Note As of Red Hat Enterprise Linux 6.7, you can use the --enable-halvm option of the lvmconf to set the locking type to 1 and disable lvmetad . For information on the lvmconf command, see the lvmconf man page. Create the logical volume and file system using standard LVM and file system commands, as in the following example. For information on creating LVM logical volumes, refer to Logical Volume Manager Administration . Edit the /etc/cluster/cluster.conf file to include the newly created logical volume as a resource in one of your services. Alternately, you can use Conga or the ccs command to configure LVM and file system resources for the cluster. The following is a sample resource manager section from the /etc/cluster/cluster.conf file that configures a CLVM logical volume as a cluster resource: Note If there are multiple logical volumes in the volume group, then the logical volume name ( lv_name ) in the lvm resource should be left blank or unspecified. Also note that in an HA-LVM configuration, a volume group may be used by only a single service. Edit the volume_list field in the /etc/lvm/lvm.conf file. Include the name of your root volume group and your host name as listed in the /etc/cluster/cluster.conf file preceded by @. The host name to include here is the machine on which you are editing the lvm.conf file, not any remote host name. Note that this string MUST match the node name given in the cluster.conf file. Below is a sample entry from the /etc/lvm/lvm.conf file: This tag will be used to activate shared VGs or LVs. DO NOT include the names of any volume groups that are to be shared using HA-LVM. Update the initramfs device on all your cluster nodes: Reboot all nodes to ensure the correct initramfs image is in use. | [
"pvcreate /dev/sd[cde]1 vgcreate shared_vg /dev/sd[cde]1 lvcreate -L 10G -n ha_lv shared_vg mkfs.ext4 /dev/shared_vg/ha_lv",
"<rm> <failoverdomains> <failoverdomain name=\"FD\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"neo-01\" priority=\"1\"/> <failoverdomainnode name=\"neo-02\" priority=\"2\"/> </failoverdomain> </failoverdomains> <resources> <lvm name=\"lvm\" vg_name=\"shared_vg\" lv_name=\"ha_lv\"/> <fs name=\"FS\" device=\"/dev/shared_vg/ha_lv\" force_fsck=\"0\" force_unmount=\"1\" fsid=\"64050\" fstype=\"ext4\" mountpoint=\"/mnt\" options=\"\" self_fence=\"0\"/> </resources> <service autostart=\"1\" domain=\"FD\" name=\"serv\" recovery=\"relocate\"> <lvm ref=\"lvm\"/> <fs ref=\"FS\"/> </service> </rm>",
"volume_list = [ \"VolGroup00\", \"@neo-01\" ]",
"dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-halvm-tagging-CA |
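As a quick sanity check of the tagging configuration described above (a sketch only: it reuses the shared_vg volume group and the neo-01 node name from the examples, and the commands are standard LVM reporting tools rather than part of the documented procedure), you can confirm which node currently holds the activation tag and whether the logical volume is active there:
vgs -o vg_name,vg_tags shared_vg           # the owning node's name should appear as a tag while it runs the service
lvs -o lv_name,vg_name,lv_attr shared_vg   # an 'a' in the lv_attr activation field means ha_lv is active on this node
vgchange -ay shared_vg                     # on a node without the tag, this is expected to activate nothing because of the volume_list filter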
Chapter 1. Overview of Builds | Chapter 1. Overview of Builds Builds is an extensible build framework based on the Shipwright project , which you can use to build container images on an OpenShift Dedicated cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah. You can create and apply build resources, view logs of build runs, and manage builds in your OpenShift Dedicated namespaces. Builds includes the following capabilities: Standard Kubernetes-native API for building container images from source code and Dockerfiles Support for Source-to-Image (S2I) and Buildah build strategies Extensibility with your own custom build strategies Execution of builds from source code in a local directory Shipwright CLI for creating and viewing logs, and managing builds on the cluster Integrated user experience with the Developer perspective of the OpenShift Dedicated web console Note Because Builds releases on a different cadence from OpenShift Dedicated, the Builds documentation is now available as a separate documentation set at builds for Red Hat OpenShift . | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_shipwright/overview-openshift-builds |
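Because the Builds documentation now lives in its own documentation set, only a sketch is given here of what the Kubernetes-native API mentioned above looks like in practice. Every name in it is a placeholder (the Build name, the Git repository, and the output image reference), the buildah ClusterBuildStrategy is assumed to be installed, and the exact schema should be checked against the builds for Red Hat OpenShift documentation:
oc apply -f - <<'EOF'
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: example-build                              # hypothetical name
spec:
  source:
    type: Git
    git:
      url: https://github.com/example/simple-app   # placeholder repository
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
    image: image-registry.openshift-image-registry.svc:5000/example/simple-app:latest   # placeholder image reference
EOF
A BuildRun resource that references this Build starts the actual image build, and the Shipwright CLI can be used to follow its logs.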
Performing disaster recovery with Identity Management | Performing disaster recovery with Identity Management Red Hat Enterprise Linux 9 Recovering IdM after a server or data loss Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_disaster_recovery_with_identity_management/index |
8.61. fontconfig | 8.61. fontconfig 8.61.1. RHBA-2014:0554 - fontconfig bug fix update Updated fontconfig packages that fix one bug are now available for Red Hat Enterprise Linux 6. The fontconfig packages contain the font configuration and customization library, which is designed to locate fonts within the system and select them according to the requirements specified by the applications. Bug Fixes BZ# 1035416 Previously, when the font cache file was stored on a Network File System (NFS), the fontconfig library sometimes did not handle mmap() calls correctly. As a consequence, applications using fontconfig, for example the GNOME terminal, could terminate unexpectedly with a bus error. With this update, the FONTCONFIG_USE_MMAP environment variable has been added to handle the mmap() calls regardless of the file system, and these calls are no longer used if the cache file is stored on an NFS. As a result, the bus errors no longer occur in the described situation. BZ# 1099546 Previously, the 25-no-bitmap-fedora.conf file name contained the word 'fedora', although file names in Red Hat Enterprise Linux are not supposed to include the word 'Fedora'. With this update, 25-no-bitmap-fedora.conf has been renamed to 25-no-bitmap-dist.conf, and the spec file has been updated. Users of fontconfig are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/fontconfig |
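To make the workaround usable in practice, the variable can be set per command or per session when an application's font cache lives on a network file system. This is a sketch only: the erratum does not list the accepted values, so the boolean spelling no below is an assumption, and gnome-terminal simply stands in for any fontconfig-based application:
FONTCONFIG_USE_MMAP=no gnome-terminal    # assumed value; asks fontconfig not to mmap() its cache files
export FONTCONFIG_USE_MMAP=no            # alternatively, apply the setting to the whole session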
Chapter 2. Installing OpenShift on a single node | Chapter 2. Installing OpenShift on a single node You can install single-node OpenShift by using either the web-based Assisted Installer or the coreos-installer tool to generate a discovery ISO image. The discovery ISO image writes the Red Hat Enterprise Linux CoreOS (RHCOS) system configuration to the target installation disk, so that you can run a single-cluster node to meet your needs. Consider using single-node OpenShift when you want to run a cluster in a low-resource or an isolated environment for testing, troubleshooting, training, or small-scale project purposes. 2.1. Installing single-node OpenShift using the Assisted Installer To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation. See the Assisted Installer for OpenShift Container Platform documentation for details and configuration options. 2.1.1. Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate. Procedure On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager . Click Create New Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name, for example: Note You cannot change the base domain or cluster name after cluster installation. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO. Complete the remaining Assisted Installer wizard steps. Important Ensure that you take note of the discovery ISO URL for installing with virtual media. If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines. Additional resources Persistent storage using logical volume manager storage What you can do with OpenShift Virtualization 2.1.2. Installing single-node OpenShift with the Assisted Installer Use the Assisted Installer to install the single-node cluster. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server's hard disk, the server restarts. Optional: Remove the discovery ISO image. The server restarts several times automatically, deploying the control plane. Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.2. 
Installing single-node OpenShift manually To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. You can monitor the installation using the openshift-install installation program. Additional resources Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Configuring DHCP or static IP addresses 2.2.1. Generating the installation ISO with coreos-installer Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure. Prerequisites Install podman . Note See "Requirements for installing OpenShift on a single node" for networking requirements, including DNS records. Procedure Set the OpenShift Container Platform version: USD export OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.16 Set the host architecture: USD export ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL by running the following command: USD export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO by running the following commands: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso Additional resources See Requirements for installing OpenShift on a single node for more information about installing OpenShift Container Platform on a single node. See Cluster capabilities for more information about enabling cluster capabilities that were disabled before installation. See Optional cluster capabilities in OpenShift Container Platform 4.16 for more information about the features provided by each capability. 2.2.2. Monitoring the cluster installation using openshift-install Use openshift-install to monitor the progress of the single-node cluster installation. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, monitor the installation by running the following command: USD ./openshift-install --dir=ocp wait-for install-complete Optional: Remove the discovery ISO image. The server restarts several times while deploying the control plane. Verification After the installation is complete, check the environment by running the following command: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.29.4 Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.3. Installing single-node OpenShift on cloud providers 2.3.1. Additional requirements for installing single-node OpenShift on a cloud provider The documentation for installer-provisioned installation on cloud providers is based on a high availability cluster consisting of three control plane nodes. When referring to the documentation, consider the differences between the requirements for a single-node OpenShift cluster and a high availability cluster. A high availability cluster requires a temporary bootstrap machine, three control plane machines, and at least two compute machines. For a single-node OpenShift cluster, you need only a temporary bootstrap machine and one cloud instance for the control plane node and no compute nodes. The minimum resource requirements for high availability cluster installation include a control plane node with 4 vCPUs and 100GB of storage. For a single-node OpenShift cluster, you must have a minimum of 8 vCPUs and 120GB of storage. The controlPlane.replicas setting in the install-config.yaml file should be set to 1 . The compute.replicas setting in the install-config.yaml file should be set to 0 . This makes the control plane node schedulable. 2.3.2. 
Supported cloud providers for single-node OpenShift The following table contains a list of supported cloud providers and CPU architectures. Table 2.1. Supported cloud providers Cloud provider CPU architecture Amazon Web Service (AWS) x86_64 and AArch64 Microsoft Azure x86_64 Google Cloud Platform (GCP) x86_64 and AArch64 2.3.3. Installing single-node OpenShift on AWS Installing a single-node cluster on AWS requires installer-provisioned installation using the "Installing a cluster on AWS with customizations" procedure. Additional resources Installing a cluster on AWS with customizations 2.3.4. Installing single-node OpenShift on Azure Installing a single node cluster on Azure requires installer-provisioned installation using the "Installing a cluster on Azure with customizations" procedure. Additional resources Installing a cluster on Azure with customizations 2.3.5. Installing single-node OpenShift on GCP Installing a single node cluster on GCP requires installer-provisioned installation using the "Installing a cluster on GCP with customizations" procedure. Additional resources Installing a cluster on GCP with customizations 2.4. Creating a bootable ISO image on a USB drive You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Create a bootable USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install software on the server. 2.5. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Note This example procedure demonstrates the steps on a Dell server. Important Ensure that you have the latest firmware version of iDRAC that is compatible with your hardware. If you have any issues with the hardware or firmware, you must contact the provider. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Use a Dell PowerEdge server that is compatible with iDRAC9. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. 
Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 2.6. Creating a custom live RHCOS ISO for remote server access In some cases, you cannot attach an external disk drive to a server, however, you need to access the server remotely to provision a node. It is recommended to enable SSH access to the server. You can create a live RHCOS ISO with SSHd enabled and with predefined credentials so that you can access the server after it boots. Prerequisites You installed the butane utility. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Download the latest live RHCOS ISO from mirror.openshift.com . Create the embedded.yaml file that the butane utility uses to create the Ignition file: variant: openshift version: 4.16.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>' 1 The core user has sudo privileges. Run the butane utility to create the Ignition file using the following command: USD butane -pr embedded.yaml -o embedded.ign After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.16.0-x86_64-live.x86_64.iso , with the coreos-installer utility: USD coreos-installer iso ignition embed -i embedded.ign rhcos-4.16.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.16.0-x86_64-live.x86_64.iso Verification Check that the custom live ISO can be used to boot the server by running the following command: # coreos-installer iso ignition show rhcos-sshd-4.16.0-x86_64-live.x86_64.iso Example output { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]" ] } ] } } 2.7. Installing single-node OpenShift with IBM Z and IBM LinuxONE Installing a single-node cluster on IBM Z(R) and IBM(R) LinuxONE requires user-provisioned installation using one of the following procedures: Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE Note Installing a single-node cluster on IBM Z(R) simplifies installation for development and test environments and requires less resource requirements at entry level. 
Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. 2.7.1. Installing single-node OpenShift with z/VM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.16 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Move the following artifacts and files to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create parameter files for a particular virtual machine: Example parameter file cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ ignition.config.url=http://<http_server>:8080/ignition/bootstrap-in-place-for-live-iso.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 ip=<ip>::<gateway>:<mask>:<hostname>::none nameserver=<dns> \ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.dasd=0.0.4411 \ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 5 zfcp.allow_lun_scan=0 1 For the ignition.config.url= parameter, specify the Ignition file for the machine role. Only HTTP and HTTPS protocols are supported. 2 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel`and `initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 For the ip= parameter, assign the IP address automatically using DHCP or manually as described in "Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE". 4 For installations on DASD-type disks, use rd.dasd= to specify the DASD where RHCOS is to be installed. Omit this entry for FCP-type disks. 5 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. Leave all other parameters unchanged. Transfer the following artifacts, files, and images to z/VM. For example by using FTP: kernel and initramfs artifacts Parameter files RHCOS images For details about how to transfer the files with FTP and boot from the virtual reader, see Installing under Z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine that is to become your bootstrap node. Log in to CMS on the bootstrap machine. IPL the bootstrap machine from the reader by running the following command: After the first reboot of the virtual machine, run the following commands directly after one another: To boot a DASD device after first reboot, run the following commands: USD cp i <devno> clear loadparm prompt where: <devno> Specifies the device number of the boot device as seen by the guest. USD cp vi vmsg 0 <kernel_parameters> where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. To boot an FCP device after first reboot, run the following commands: USD cp set loaddev portname <wwpn> lun <lun> where: <wwpn> Specifies the target port and <lun> the logical unit in hexadecimal format. 
USD cp set loaddev bootprog <n> where: <n> Specifies the kernel to be booted. USD cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>' where: <kernel_parameters> Specifies a set of kernel parameters to be stored as system control program data (SCPDATA). When booting Linux, these kernel parameters are concatenated to the end of the existing kernel parameters that are used by your boot configuration. The combined parameter string must not exceed 896 characters. <APPEND|NEW> Optional: Specify APPEND to append kernel parameters to existing SCPDATA. This is the default. Specify NEW to replace existing SCPDATA. Example USD cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1' To start the IPL and boot process, run the following command: USD cp i <devno> where: <devno> Specifies the device number of the boot device as seen by the guest. 2.7.2. Installing single-node OpenShift with RHEL KVM on IBM Z and IBM LinuxONE Prerequisites You have installed podman . Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.16 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Before you launch virt-install , move the following files and artifacts to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create the KVM guest nodes by using the following components: RHEL kernel and initramfs artifacts Ignition files The new disk image Adjusted parm line arguments USD virt-install \ --name <vm_name> \ --autostart \ --memory=<memory_mb> \ --cpu host \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 1 --disk size=100 \ --network network=<virt_network_parm> \ --graphics none \ --noautoconsole \ --extra-args "rd.neednet=1 ignition.platform.id=metal ignition.firstboot" \ --extra-args "ignition.config.url=http://<http_server>/bootstrap.ign" \ 2 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 3 --extra-args "ip=<ip>::<gateway>:<mask>:<hostname>::none" \ 4 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 Specify the location of the bootstrap.ign config file. Only HTTP and HTTPS protocols are supported. 3 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For the ip= parameter, assign the IP address manually as described in "Installing a cluster with RHEL KVM on IBM Z(R) and IBM(R) LinuxONE". 2.7.3. Installing single-node OpenShift in an LPAR on IBM Z and IBM LinuxONE Prerequisites If you are deploying a single-node cluster there are zero compute nodes, the Ingress Controller pods run on the control plane nodes. In single-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. Procedure Set the OpenShift Container Platform version by running the following command: USD OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version. For example, latest-4.16 . Set the host architecture by running the following command: USD ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture s390x . 
Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxvf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 7 sshKey: | <ssh_key> 8 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 8 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to true . Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to true as shown in the following spec stanza: spec: mastersSchedulable: true status: {} Save and exit the file. Create the Ignition configuration files by running the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Obtain the RHEL kernel , initramfs , and rootfs artifacts from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel rhcos-<version>-live-kernel-<architecture> initramfs rhcos-<version>-live-initramfs.<architecture>.img rootfs rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Move the following artifacts and files to an HTTP or HTTPS server: Downloaded RHEL live kernel , initramfs , and rootfs artifacts Ignition files Create a parameter file for the bootstrap in an LPAR: Example parameter file for the bootstrap machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ 4 rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \ rd.dasd=0.0.4411 \ 5 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ 6 zfcp.allow_lun_scan=0 1 Specify the block device on the system to install to. For installations on DASD-type disk use dasda , for installations on FCP-type disks use sda . 2 Specify the location of the bootstrap.ign config file. Only HTTP and HTTPS protocols are supported. 3 For the coreos.live.rootfs_url= artifact, specify the matching rootfs artifact for the kernel`and `initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For the ip= parameter, assign the IP address manually as described in "Installing a cluster in an LPAR on IBM Z(R) and IBM(R) LinuxONE". 5 For installations on DASD-type disks, use rd.dasd= to specify the DASD where RHCOS is to be installed. Omit this entry for FCP-type disks. 6 For installations on FCP-type disks, use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. Omit this entry for DASD-type disks. You can adjust further parameters if required. Create a parameter file for the control plane in an LPAR: Example parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 \ rd.dasd=0.0.4411 \ rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \ zfcp.allow_lun_scan=0 1 Specify the location of the master.ign config file. Only HTTP and HTTPS protocols are supported. Transfer the following artifacts, files, and images to the LPAR. For example by using FTP: kernel and initramfs artifacts Parameter files RHCOS images For details about how to transfer the files with FTP and boot, see Installing in an LPAR . Boot the bootstrap machine. Boot the control plane machine. 2.8. Installing single-node OpenShift with IBM Power Installing a single-node cluster on IBM Power(R) requires user-provisioned installation using the "Installing a cluster with IBM Power(R)" procedure. Note Installing a single-node cluster on IBM Power(R) simplifies installation for development and test environments and requires less resource requirements at entry level. Hardware requirements The equivalent of two Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to connect to the LoadBalancer service and to serve data for traffic outside of the cluster. 
Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Power(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Additional resources Installing a cluster on IBM Power(R) 2.8.1. Setting up bastion for single-node OpenShift with IBM Power Prior to installing single-node OpenShift on IBM Power(R), you must set up bastion. Setting up a bastion server for single-node OpenShift on IBM Power(R) requires the configuration of the following services: PXE is used for the single-node OpenShift cluster installation. PXE requires the following services to be configured and run: DNS to define api, api-int, and *.apps DHCP service to enable PXE and assign an IP address to the single-node OpenShift node HTTP to provide ignition and RHCOS rootfs image TFTP to enable PXE You must install dnsmasq to support DNS, DHCP, and PXE, and httpd for HTTP. Use the following procedure to configure a bastion server that meets these requirements. Procedure Use the following command to install grub2, which is required to enable PXE for PowerVM: grub2-mknetdir --net-directory=/var/lib/tftpboot Example /var/lib/tftpboot/boot/grub2/grub.cfg file default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo "Loading initrd" initrd "/rhcos/initramfs.img" } fi Use the following commands to download RHCOS image files from the mirror repo for PXE. Enter the following command to assign the RHCOS_URL variable the following 4.12 URL: USD export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/ Enter the following command to navigate to the /var/lib/tftpboot/rhcos directory: USD cd /var/lib/tftpboot/rhcos Enter the following command to download the specified RHCOS kernel file from the URL stored in the RHCOS_URL variable: USD wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -O kernel Enter the following command to download the RHCOS initramfs file from the URL stored in the RHCOS_URL variable: USD wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -O initramfs.img Enter the following command to navigate to the /var/www/html/install/ directory: USD cd /var/www/html/install/ Enter the following command to download and save the RHCOS root filesystem image file from the URL stored in the RHCOS_URL variable: USD wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -O rootfs.img To create the ignition file for a single-node OpenShift cluster, you must create the install-config.yaml file. 
Enter the following command to create the work directory that holds the file: USD mkdir -p ~/sno-work Enter the following command to navigate to the ~/sno-work directory: USD cd ~/sno-work Use the following sample file to create the required install-config.yaml in the ~/sno-work directory: apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9 1 Add the cluster domain name. 2 Set the compute replicas to 0 . This makes the control plane node schedulable. 3 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures that the cluster runs on a single node. 4 Set the metadata name to the cluster name. 5 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 6 Set the cidr value to match the subnet of the single-node OpenShift cluster. 7 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 8 Copy the pull secret from Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 9 Add the public SSH key from the administration host so that you can log in to the cluster after installation. Download the openshift-install installation program to create the ignition file and copy it to the HTTP directory. Enter the following command to download the openshift-install-linux-4.12.0.tar.gz file: USD wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz Enter the following command to unpack the openshift-install-linux-4.12.0.tar.gz archive: USD tar xzvf openshift-install-linux-4.12.0.tar.gz Enter the following command to create the single-node ignition configuration: USD ./openshift-install --dir=~/sno-work create single-node-ignition-config Enter the following command to copy the ignition file to the HTTP directory: USD cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign Enter the following command to restore the SELinux context for the /var/www/html directory: USD restorecon -vR /var/www/html || true Bastion now has all the required files and is properly configured to install single-node OpenShift. 2.8.2. Installing single-node OpenShift with IBM Power Prerequisites You have set up bastion. Procedure There are two steps for the single-node OpenShift cluster installation. First, the single-node OpenShift logical partition (LPAR) needs to boot up with PXE, and then you need to monitor the installation progress. Use the following command to boot PowerVM with netboot: USD lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name> where: sno_mac Specifies the MAC address of the single-node OpenShift cluster. sno_ip Specifies the IP address of the single-node OpenShift cluster. server_ip Specifies the IP address of the bastion (PXE server). gateway Specifies the network's gateway IP. lpar_name Specifies the single-node OpenShift LPAR name in the HMC. 
cec_name Specifies the system name where the sno_lpar resides. After the single-node OpenShift LPAR boots up with PXE, use the openshift-install command to monitor the progress of the installation: Run the following command and wait for the bootstrap process to complete: ./openshift-install wait-for bootstrap-complete After it returns successfully, run the following command and wait for the installation to complete: ./openshift-install wait-for install-complete | [
"example.com",
"<cluster_name>.example.com",
"export OCP_VERSION=<ocp_version> 1",
"export ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)",
"curl -L USDISO_URL -o rhcos-live.iso",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'",
"coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso",
"./openshift-install --dir=ocp wait-for install-complete",
"export KUBECONFIG=ocp/auth/kubeconfig",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.29.4",
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"variant: openshift version: 4.16.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'",
"butane -pr embedded.yaml -o embedded.ign",
"coreos-installer iso ignition embed -i embedded.ign rhcos-4.16.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.16.0-x86_64-live.x86_64.iso",
"coreos-installer iso ignition show rhcos-sshd-4.16.0-x86_64-live.x86_64.iso",
"{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal ignition.config.url=http://<http_server>:8080/ignition/bootstrap-in-place-for-live-iso.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<mask>:<hostname>::none nameserver=<dns> \\ 3 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.dasd=0.0.4411 \\ 4 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 5 zfcp.allow_lun_scan=0",
"cp ipl c",
"cp i <devno> clear loadparm prompt",
"cp vi vmsg 0 <kernel_parameters>",
"cp set loaddev portname <wwpn> lun <lun>",
"cp set loaddev bootprog <n>",
"cp set loaddev scpdata {APPEND|NEW} '<kernel_parameters>'",
"cp set loaddev scpdata 'rd.zfcp=0.0.8001,0x500507630a0350a4,0x4000409D00000000 ip=encbdd0:dhcp::02:00:00:02:34:02 rd.neednet=1'",
"cp i <devno>",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install --dir=ocp create single-node-ignition-config",
"virt-install --name <vm_name> --autostart --memory=<memory_mb> --cpu host --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 1 --disk size=100 --network network=<virt_network_parm> --graphics none --noautoconsole --extra-args \"rd.neednet=1 ignition.platform.id=metal ignition.firstboot\" --extra-args \"ignition.config.url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<mask>:<hostname>::none\" \\ 4 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --wait",
"OCP_VERSION=<ocp_version> 1",
"ARCH=<architecture> 1",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-client-linux.tar.gz -o oc.tar.gz",
"tar zxvf oc.tar.gz",
"chmod +x oc",
"curl -k https://mirror.openshift.com/pub/openshift-v4/USD{ARCH}/clients/ocp/USD{OCP_VERSION}/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz",
"tar zxvf openshift-install-linux.tar.gz",
"chmod +x openshift-install",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} pullSecret: '<pull_secret>' 7 sshKey: | <ssh_key> 8",
"mkdir ocp",
"cp install-config.yaml ocp",
"./openshift-install create manifests --dir <installation_directory> 1",
"spec: mastersSchedulable: true status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \\ 4 rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 \\ 5 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 \\ 6 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.1140,0.0.1141,0.0.1142,layer2=1,portno=0 rd.dasd=0.0.4411 rd.zfcp=0.0.8001,0x50050763040051e3,0x4000406300000000 zfcp.allow_lun_scan=0",
"grub2-mknetdir --net-directory=/var/lib/tftpboot",
"default=0 fallback=1 timeout=1 if [ USD{net_default_mac} == fa:b0:45:27:43:20 ]; then menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://192.168.10.5:8000/install/rootfs.img ignition.config.url=http://192.168.10.5:8000/ignition/sno.ign echo \"Loading initrd\" initrd \"/rhcos/initramfs.img\" } fi",
"export RHCOS_URL=https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/4.12/latest/",
"cd /var/lib/tftpboot/rhcos",
"wget USD{RHCOS_URL}/rhcos-live-kernel-ppc64le -o kernel",
"wget USD{RHCOS_URL}/rhcos-live-initramfs.ppc64le.img -o initramfs.img",
"cd /var//var/www/html/install/",
"wget USD{RHCOS_URL}/rhcos-live-rootfs.ppc64le.img -o rootfs.img",
"mkdir -p ~/sno-work",
"cd ~/sno-work",
"apiVersion: v1 baseDomain: <domain> 1 compute: - name: worker replicas: 0 2 controlPlane: name: master replicas: 1 3 metadata: name: <name> 4 networking: 5 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 6 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 7 pullSecret: '<pull_secret>' 8 sshKey: | <ssh_key> 9",
"wget https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.12.0/openshift-install-linux-4.12.0.tar.gz",
"tar xzvf openshift-install-linux-4.12.0.tar.gz",
"./openshift-install --dir=~/sno-work create create single-node-ignition-config",
"cp ~/sno-work/single-node-ignition-config.ign /var/www/html/ignition/sno.ign",
"restorecon -vR /var/www/html || true",
"lpar_netboot -i -D -f -t ent -m <sno_mac> -s auto -d auto -S <server_ip> -C <sno_ip> -G <gateway> <lpar_name> default_profile <cec_name>",
"./openshift-install wait-for bootstrap-complete",
"./openshift-install wait-for install-complete"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_a_single_node/install-sno-installing-sno |
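After wait-for install-complete returns, a quick health check of the new single-node cluster can be run with the kubeconfig that the installer writes into the installation directory. The following is a minimal sketch added for illustration, assuming the ocp installation directory used in the commands above:

export KUBECONFIG=ocp/auth/kubeconfig    # kubeconfig generated by openshift-install
oc get nodes                             # the single node should report Ready
oc get clusteroperators                  # cluster Operators should report Available=True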
Chapter 10. Ceph File System snapshot mirroring | Chapter 10. Ceph File System snapshot mirroring As a storage administrator, you can replicate a Ceph File System (CephFS) to a remote Ceph File System on another Red Hat Ceph Storage cluster. Prerequisites The source and the target storage clusters must be running Red Hat Ceph Storage 6.0 or later. The Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS on another Red Hat Ceph Storage cluster. Snapshot synchronization copies snapshot data to a remote Ceph File System, and creates a new snapshot on the remote target with the same name. You can configure specific directories for snapshot synchronization. Management of CephFS mirrors is done by the CephFS mirroring daemon ( cephfs-mirror ). This snapshot data is synchronized by doing a bulk copy to the remote CephFS. The chosen order of synchronizing snapshot pairs is based on the creation using the snap-id . Important Synchronizing hard links is not supported. Hard linked files get synchronized as regular files. The CephFS snapshot mirroring includes features, for example snapshot incarnation or high availability. These can be managed through Ceph Manager mirroring module, which is the recommended control interface. Ceph Manager Module and interfaces The Ceph Manager mirroring module is disabled by default. It provides interfaces for managing mirroring of directory snapshots. Ceph Manager interfaces are mostly wrappers around monitor commands for managing CephFS mirroring. They are the recommended control interface. The Ceph Manager mirroring module is implemented as a Ceph Manager plugin. It is responsible for assigning directories to the cephfs-mirror daemons for synchronization. The Ceph Manager mirroring module also provides a family of commands to control mirroring of directory snapshots. The mirroring module does not manage the cephfs-mirror daemons. The stopping, starting, restarting, and enabling of the cephfs-mirror daemons is controlled by systemctl , but managed by cephadm . Note Mirroring module commands use the fs snapshot mirror prefix as compared to the monitor commands with the fs mirror prefix. Assure that you are using the module command prefix to control the mirroring of directory snapshots. Snapshot incarnation A snapshot might be deleted and recreated with the same name and different content. The user could synchronize an "old" snapshot earlier and recreate the snapshot when the mirroring was disabled. Using snapshot names to infer the point-of-continuation would result in the "new" snapshot, an incarnation, never getting picked up for synchronization. Snapshots on the secondary file system store the snap-id of the snapshot it was synchronized from. This metadata is stored in the SnapInfo structure on the Ceph Metadata Server. High availability You can deploy multiple cephfs-mirror daemons on two or more nodes to achieve concurrency in synchronization of directory snapshots. When cephfs-mirror daemons are deployed or terminated, the Ceph Manager mirroring module discovers the modified set of cephfs-mirror daemons and rebalances the directory assignment amongst the new set thus providing high availability. cephfs-mirror daemons share the synchronization load using a simple M/N policy, where M is the number of directories and N is the number of cephfs-mirror daemons. 
Re-addition of Ceph File System mirror peers When re-adding or reassigning a peer to a CephFS in another cluster, ensure that all mirror daemons have stopped synchronization to the peer. You can verify this with the fs mirror status command. The Peer UUID should not show up in the command output. Purge synchronized directories from the peer before re-adding it to another CephFS, especially those directories which might exist in the new primary file system. This is not required if you are re-adding a peer to the same primary file system it was earlier synchronized from. Additional Resources See Viewing the mirror status for a Ceph File System for more details on the fs mirror status command. 10.1. Configuring a snapshot mirror for a Ceph File System You can configure a Ceph File System (CephFS) for mirroring to replicate snapshots to another CephFS on a remote Red Hat Ceph Storage cluster. Note The time taken for synchronizing to a remote storage cluster depends on the file size and the total number of files in the mirroring path. Prerequisites The source and the target storage clusters must be healthy and running Red Hat Ceph Storage 8.0 or later. Root-level access to a Ceph Monitor node in the source and the target storage clusters. At least one Ceph File System deployed on your storage cluster. Procedure Log into the Cephadm shell: Example On the source storage cluster, deploy the CephFS mirroring daemon: Syntax Example This command creates a Ceph user called, cephfs-mirror , and deploys the cephfs-mirror daemon on the given node. Optional: Deploy multiple CephFS mirroring daemons and achieve high availability: Syntax Example This example deploys three cephfs-mirror daemons on different hosts. Warning Do not separate the hosts with commas as it results in the following error: On the target storage cluster, create a user for each CephFS peer: Syntax Example On the source storage cluster, enable the CephFS mirroring module: Example On the source storage cluster, enable mirroring on a Ceph File System: Syntax Example Optional: Disable snapshot mirroring: Syntax Example Warning Disabling snapshot mirroring on a file system removes the configured peers. You have to import the peers again by bootstrapping them. Prepare the target peer storage cluster. On a target node, enable the mirroring Ceph Manager module: Example On the same target node, create the peer bootstrap: Syntax The SITE_NAME is a user-defined string to identify the target storage cluster. Example Copy the token string between the double quotes for use in the step. On the source storage cluster, import the bootstrap token from the target storage cluster: Syntax Example On the source storage cluster, list the CephFS mirror peers: Syntax Example Optional: Remove a snapshot peer: Syntax Example Note See Viewing the mirror status for a Ceph File System on how to find the peer UUID value. On the source storage cluster, configure a directory for snapshot mirroring: Syntax Example Important Only absolute paths inside the Ceph File System are valid. Note The Ceph Manager mirroring module normalizes the path. For example, the /d1/d2/../dN directories are equivalent to /d1/d2 . Once a directory has been added for mirroring, its ancestor directories and subdirectories are prevented from being added for mirroring. Optional: Stop snapshot mirroring for a directory: Syntax Example Additional Resources See the Viewing the mirror status for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more information. 
See the Ceph File System mirroring section in the Red Hat Ceph Storage File System Guide for more information. 10.2. Viewing the mirror status for a Ceph File System The Ceph File System (CephFS) mirror daemon ( cephfs-mirror ) gets asynchronous notifications about changes in the CephFS mirroring status, along with peer updates. The CephFS mirroring module provides a mirror daemon status interface to check mirror daemon status. For more detailed information, you can query the cephfs-mirror admin socket with commands to retrieve the mirror status and peer status. Prerequisites A running Red Hat Ceph Storage cluster. At least one deployment of a Ceph File System with mirroring enabled. Root-level access to the node running the CephFS mirroring daemon. Procedure Log into the Cephadm shell: Example Check the cephfs-mirror daemon status: Syntax Example For more detailed information, use the admin socket interface as detailed below. Find the Ceph File System ID on the node running the CephFS mirroring daemon: Syntax Example The Ceph File System ID in this example is cephfs@11 . Note When mirroring is disabled, the respective fs mirror status command for the file system does not show up in the help command. View the mirror status: Syntax Example 1 This is the unique peer UUID. View the peer status: Syntax Example The state can be one of these three values: 1 idle means the directory is currently not being synchronized. 2 syncing means the directory is currently being synchronized. 3 failed means the directory has hit the upper limit of consecutive failures. The default number of consecutive failures is 10, and the default retry interval is 60 seconds. Display the directory to which cephfs-mirror daemon is mapped: Syntax Example 1 instance_id is the RADOS instance-ID associated with a cephfs-mirror daemon. Example 1 stalled state means the CephFS mirroring is stalled. The second example shows the command output when no mirror daemons are running. Additional Resources See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more information. 10.3. Viewing metrics for Ceph File System snapshot mirroring Viewing these metrics helps in monitoring the performance and the sync progress. Check Ceph File System snapshot mirror health and volume metrics by using the counter dump. Prerequisites A running IBM Storage Ceph cluster. A minimum of one deployment of a Ceph File System with snapshot mirroring enabled. Root-level access to the node running the Ceph File System mirroring daemon. Procedure Get the name of the asok file. The asok file is available where the mirroring daemon is running and is located at /var/run/ceph/ within the cephadm shell. Check the mirroring metrics and synchronization status by running the following command on the node running the CephFS mirroring daemon. Syntax Example Metrics description: Labeled Perf Counters generate metrics which can be consumed by the OCP/ODF dashboard to provide monitoring of geo-replication in the OCP and ACM dashboard and elsewhere. These metrics track the progress of cephfs_mirror syncing and provide monitoring capability. The exported metrics enable monitoring based on the following alerts. mirroring_peers The number of peers involved in mirroring. directory_count The total number of directories being synchronized. mirrored_filesystems The total number of file systems which are mirrored. mirror_enable_failures Enable mirroring failures. snaps_synced The total number of snapshots successfully synchronized.
sync_bytes The total bytes being synchronized. sync_failures The total number of failed snapshot synchronizations. snaps_deleted The total number of snapshots deleted. snaps_renamed The total number of snapshots renamed. avg_synced_time The average time taken by all snapshot synchronizations. last_synced_start The sync start time of the last synced snapshot. last_synced_end The sync end time of the last synced snapshot. last_synced_duration The time duration of the last synchronization. last_synced_bytes The total bytes being synchronized for the last synced snapshot. Additional Resources For details, see the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . For details, see the Red Hat Ceph Storage Installation Guide . For details, see The Ceph File System Metadata Server section in the Red Hat Ceph Storage File System Guide . For details, see the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide . | [
"cephadm shell",
"ceph orch apply cephfs-mirror [\" NODE_NAME \"]",
"ceph orch apply cephfs-mirror \"node1.example.com\" Scheduled cephfs-mirror update",
"ceph orch apply cephfs-mirror --placement=\" PLACEMENT_SPECIFICATION \"",
"ceph orch apply cephfs-mirror --placement=\"3 host1 host2 host3\" Scheduled cephfs-mirror update",
"Error EINVAL: name component must include only a-z, 0-9, and -",
"ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps",
"ceph fs authorize cephfs client.mirror_remote / rwps [client.mirror_remote] key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==",
"ceph mgr module enable mirroring",
"ceph fs snapshot mirror enable FILE_SYSTEM_NAME",
"ceph fs snapshot mirror enable cephfs",
"ceph fs snapshot mirror disable FILE_SYSTEM_NAME",
"ceph fs snapshot mirror disable cephfs",
"ceph mgr module enable mirroring",
"ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME",
"ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site {\"token\": \"eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==\"}",
"ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN",
"ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==",
"ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME",
"ceph fs snapshot mirror peer_list cephfs {\"e5ecb883-097d-492d-b026-a585d1d7da79\": {\"client_name\": \"client.mirror_remote\", \"site_name\": \"remote-site\", \"fs_name\": \"cephfs\", \"mon_host\": \"[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]\"}}",
"ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID",
"ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79",
"ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH",
"ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1",
"ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH",
"ceph fs snapshot mirror remove cephfs /home/user1",
"cephadm shell",
"ceph fs snapshot mirror daemon status",
"ceph fs snapshot mirror daemon status [ { \"daemon_id\": 15594, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"cephfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": \"e5ecb883-097d-492d-b026-a585d1d7da79\", \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" }, \"stats\": { \"failure_count\": 1, \"recovery_count\": 0 } } ] } ] } ]",
"ceph --admin-daemon PATH_TO_THE_ASOK_FILE help",
"ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help { \"fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e\": \"get peer mirror status\", \"fs mirror status cephfs@11\": \"get filesystem mirror status\", }",
"ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME @_FILE_SYSTEM_ID",
"ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11 { \"rados_inst\": \"192.168.0.5:0/1476644347\", \"peers\": { \"1011435c-9e30-4db6-b720-5bf482006e0e\": { 1 \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } }",
"ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror status FILE_SYSTEM_NAME @ FILE_SYSTEM_ID PEER_UUID",
"ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e { \"/home/user1\": { \"state\": \"idle\", 1 \"last_synced_snap\": { \"id\": 120, \"name\": \"snap1\", \"sync_duration\": 0.079997898999999997, \"sync_time_stamp\": \"274900.558797s\" }, \"snaps_synced\": 2, 2 \"snaps_deleted\": 0, 3 \"snaps_renamed\": 0 } }",
"ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH",
"ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"instance_id\": \"25184\", 1 \"last_shuffled\": 1661162007.012663, \"state\": \"mapped\" }",
"ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"reason\": \"no mirror daemons running\", \"state\": \"stalled\" 1 }",
"ceph --admin-daemon ASOK_FILE_NAME counter dump",
"ceph --admin-daemon ceph-client.cephfs-mirror.ceph1-hk-n-0mfqao-node7.pnbrlu.2.93909288073464.asok counter dump [ { \"key\": \"cephfs_mirror\", \"value\": [ { \"labels\": {}, \"counters\": { \"mirrored_filesystems\": 1, \"mirror_enable_failures\": 0 } } ] }, { \"key\": \"cephfs_mirror_mirrored_filesystems\", \"value\": [ { \"labels\": { \"filesystem\": \"cephfs\" }, \"counters\": { \"mirroring_peers\": 1, \"directory_count\": 1 } } ] }, { \"key\": \"cephfs_mirror_peers\", \"value\": [ { \"labels\": { \"peer_cluster_filesystem\": \"cephfs\", \"peer_cluster_name\": \"remote_site\", \"source_filesystem\": \"cephfs\", \"source_fscid\": \"1\" }, \"counters\": { \"snaps_synced\": 1, \"snaps_deleted\": 0, \"snaps_renamed\": 0, \"sync_failures\": 0, \"avg_sync_time\": { \"avgcount\": 1, \"sum\": 4.216959457, \"avgtime\": 4.216959457 }, \"sync_bytes\": 132 } } ] } ]"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/file_system_guide/ceph-file-system-mirrors |
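Because the counter dump described above returns JSON, per-peer counters such as snaps_synced, sync_failures, and sync_bytes can be pulled out with a small filter. This is a minimal sketch, assuming jq is available in the cephadm shell; the asok path is a hypothetical placeholder, replace it with the real file name under /var/run/ceph/:

ASOK=/var/run/ceph/<fsid>/ceph-client.cephfs-mirror.<host>.<id>.asok   # hypothetical name; list /var/run/ceph/ to find it
ceph --admin-daemon "$ASOK" counter dump | jq '.[] | select(.key == "cephfs_mirror_peers") | .value[].counters | {snaps_synced, sync_failures, sync_bytes}'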
Chapter 9. Uninstalling a cluster on OpenStack | Chapter 9. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 9.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note If you deployed your cluster to the AWS C2S Secret Region, the installation program does not support destroying the cluster; you must manually remove the cluster resources. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_openstack/uninstalling-cluster-openstack |
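Once the destroy command finishes, it can be worth scanning the RHOSP project for leftovers, as the note above suggests. The sketch below uses the openstack CLI; filtering on the infrastructure ID from metadata.json is an assumption about how the installer names resources, and jq is assumed to be installed:

INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)   # read before deleting the installation directory
openstack server list | grep "$INFRA_ID"
openstack port list | grep "$INFRA_ID"
openstack volume list | grep "$INFRA_ID"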
Chapter 6. Known issues | Chapter 6. Known issues This section lists the known issues for AMQ Streams 1.6. Issue Number ENTMQST-2030 - kafka-acls reports javax.management.InstanceAlreadyExistsException: kafka.admin.client:type=app-info,id=<client_id> with client.id set Description If the bin/kafka-acls.sh utility is used in combination with the --bootstrap-server parameter to add or remove an ACL, the operation is successful but a warning is generated. The reason for the warning is that a second AdminClient instance is created. This will be fixed in a future release of Kafka. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/known-issues-str |
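For context, the warning described above appears on an otherwise successful invocation such as the following. This is a hedged sketch; the broker address, principal, and topic are placeholders rather than values taken from the release note:

bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:alice --operation Read --topic my-topic
# The ACL is added, but a javax.management.InstanceAlreadyExistsException warning may be logged
# because a second AdminClient instance is registered under the same client.id.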
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.13/pr01 |
2.3.3. Disabling ACPI Completely in the grub.conf File | 2.3.3. Disabling ACPI Completely in the grub.conf File The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 2.3.1, "Disabling ACPI Soft-Off with chkconfig Management" ). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management ( Section 2.3.2, "Disabling ACPI Soft-Off with the BIOS" ). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. You can disable ACPI completely by editing the grub.conf file of each cluster node as follows: Open /boot/grub/grub.conf with a text editor. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (refer to Example 2.12, "Kernel Boot Command Line with acpi=off Appended to It" ). Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 2.12. Kernel Boot Command Line with acpi=off Appended to It In this example, acpi=off has been appended to the kernel boot command line - the line starting with "kernel /vmlinuz-2.6.18-36.el5". | [
"grub.conf generated by anaconda # Note that you do not have to rerun grub after making changes to this file NOTICE: You have a /boot partition. This means that all kernel and initrd paths are relative to /boot/, eg. root (hd0,0) kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00 initrd /initrd-version.img #boot=/dev/hda default=0 timeout=5 serial --unit=0 --speed=115200 terminal --timeout=5 serial console title Red Hat Enterprise Linux Server (2.6.18-36.el5) root (hd0,0) kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off initrd /initrd-2.6.18-36.el5.img"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-apci-disable-boot-ca |
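In addition to the fencing test described above, you can confirm that the node actually booted with ACPI disabled by checking the running kernel command line. This check is a small illustrative addition, not part of the original procedure:

cat /proc/cmdline                                     # should now include acpi=off
grep -q 'acpi=off' /proc/cmdline && echo "ACPI disabled at boot"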
Chapter 19. Red Hat Enterprise Linux 7.5 for ARM | Chapter 19. Red Hat Enterprise Linux 7.5 for ARM Red Hat Enterprise Linux 7.5 for ARM introduces Red Hat Enterprise Linux 7.5 user space with an updated kernel, which is based on version 4.14 and is provided by the kernel-alt packages. The offering is distributed with other updated packages, but most of the packages are standard Red Hat Enterprise Linux 7 Server RPMs. Installation ISO images are available on the Customer Portal Downloads page . For information about Red Hat Enterprise Linux 7.5 user space, see the Red Hat Enterprise Linux 7 documentation . For information regarding the previous version, refer to Red Hat Enterprise Linux 7.4 for ARM - Release Notes . The following packages are provided as Development Preview in this release: libvirt (Optional channel) qemu-kvm-ma (Optional channel) Note KVM virtualization is a Development Preview on the 64-bit ARM architecture, and thus is not supported by Red Hat. For more information, see the Virtualization Deployment and Administration Guide . Customers may contact Red Hat and describe their use case, which will be taken into consideration for a future release of Red Hat Enterprise Linux. 19.1. New Features and Updates Core Kernel This update introduces the qrwlock queue write lock for 64-bit ARM systems. The implementation of this mechanism improves performance and prevents lock starvation by ensuring fair handling of multiple CPUs competing for the global task lock. This change also resolves a known issue, which was present in earlier releases and which caused soft lockups under heavy load. Note that any kernel modules built for previous versions of Red Hat Enterprise Linux 7 for ARM (against the kernel-alt packages) must be rebuilt against the updated kernel. (BZ#1507568) Security USBGuard is now fully supported on 64-bit ARM systems The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. Using USBGuard on 64-bit ARM systems, previously available as a Technology Preview, is now fully supported. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/chap-Red_Hat_Enterprise_Linux-7.5_Release_Notes-RHEL_for_ARM |
3.4. Setting Up Multipathing in the initramfs File System | 3.4. Setting Up Multipathing in the initramfs File System You can set up multipathing in the initramfs file system. After configuring multipath, you can rebuild the initramfs file system with the multipath configuration files by executing the dracut command with the following options: If you run multipath from the initramfs file system and you make any changes to the multipath configuration files, you must rebuild the initramfs file system for the changes to take effect. | [
"dracut --force --add multipath"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/mp_initramfs |
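After rebuilding the initramfs as shown above, you can confirm that the multipath configuration and module were included in the image. This is a minimal sketch; lsinitrd ships with dracut, and the image path assumes the default naming for the running kernel:

lsinitrd /boot/initramfs-$(uname -r).img | grep -E 'multipath|dm-multipath'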
8.3.2. Installing from a Hard Drive | 8.3.2. Installing from a Hard Drive The Select Partition screen applies only if you are installing from a disk partition (that is, you selected Hard Drive in the Installation Method dialog). This dialog allows you to name the disk partition and directory from which you are installing Red Hat Enterprise Linux. If you used the repo=hd boot option, you already specified a partition. Figure 8.5. Selecting Partition Dialog for Hard Drive Installation Select the partition containing the ISO files from the list of available partitions. Internal IDE, SATA, SCSI, and USB drive device names begin with /dev/sd . Each individual drive has its own letter, for example /dev/sda . Each partition on a drive is numbered, for example /dev/sda1 . Also specify the Directory holding images . Enter the full directory path from the drive that contains the ISO image files. The following table shows some examples of how to enter this information: Table 8.1. Location of ISO images for different partition types Partition type Volume Original path to files Directory to use VFAT D:\ D:\Downloads\RHEL6.9 /Downloads/RHEL6.9 ext2, ext3, ext4 /home /home/user1/RHEL6.9 /user1/RHEL6.9 If the ISO images are in the root (top-level) directory of a partition, enter a / . If the ISO images are located in a subdirectory of a mounted partition, enter the name of the directory holding the ISO images within that partition. For example, if the partition on which the ISO images is normally mounted as /home/ , and the images are in /home/new/ , you would enter /new/ . Important An entry without a leading slash may cause the installation to fail. Select OK to continue. Proceed with Chapter 9, Installing Using Anaconda . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-begininstall-hd-x86 |
12.3. Booleans | 12.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: allow_postfix_local_write_mail_spool Having this Boolean enables Postfix to write to the local mail spool on the system. Postfix requires this Boolean to be enabled for normal operation when local spools are used. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, run the following command as root: | [
"~]# semanage boolean -l | grep service_name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-postfix-booleans |
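If the Boolean described above needs to be enabled for your deployment, it can be set persistently and then verified. This short sketch uses standard SELinux tooling; only the Boolean name comes from the text above:

setsebool -P allow_postfix_local_write_mail_spool on    # -P keeps the setting across reboots
getsebool allow_postfix_local_write_mail_spool          # should report --> on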
Chapter 12. Kernel | Chapter 12. Kernel Kernel version in RHEL 7.5 Red Hat Enterprise Linux 7.5 is distributed with the kernel version 3.10.0-862. (BZ#1801759) Memory Protection Keys are now supported in later Intel processors Memory Protection Keys provide a mechanism for enforcing page-based protections, but without requiring modifications of the page tables when an application changes protection domains. To determine if your processor supports Memory Protection Keys, check for the pku flag in the /proc/cpuinfo file. Further documentation including programming examples can be found in the /usr/share/doc/kernel-doc-*/Documentation/x86/protection-keys.txt file, which is provided by the kernel-doc package. (BZ#1272615) EDAC support added for Pondicherry 2 memory controllers Error Detection and Correction support has been added for Pondicherry 2 memory controllers used on machines based on the Intel Atom C3000-series processors. (BZ#1273769) MBA is now supported Memory Bandwidth Allocation (MBA) is an extension of the existing Cache QoS Enforcement (CQE) feature found in Broadwell servers. MBA is a feature of the Intel Resource Director Technology (RDT) that provides control over memory bandwidth for applications. With this update, the MBA support is added. (BZ#1379551) Swap optimizations enable fast block devices to be used as secondary memory Previously, the swap subsystem was not performance-critical because the performance of rotating disks, especially in terms of latency, was orders of magnitude worse than the rest of the memory management subsystem. With the advent of fast SSD devices, the overhead of the swap subsystem has become significant. This update brings a series of performance optimizations that reduce this overhead. (BZ#1400689) HID Wacom rebased to version 4.12 The HID Wacom kernel module packages have been upgraded to upstream version 4.12, which provides a number of bug fixes and enhancements over the version: The hid_wacom power supply code has been updated, fixing previously existing problems. Support has been added for the Bluetooth-based Intuos 2 Pro pen tablet. Bugs affecting the Intuos 2 Pro pen tablet and the Bamboo slate have been fixed. (BZ#1475409) New livepatch functionality improves the latency and success rate of the kpatch-patch packages With this update, the kpatch kernel live patching infrastructure has been upgraded to use the new upstream livepatch functionality for patching the kernel. This functionality improves the scheduling latency and success rate of the kpatch-patch hotfix packages. (BZ#1430637) Persistent Kernel Module Upgrade (PKMU) supported The kmod packages provide various programs for automatic loading, unloading, and management of kernel modules. Previously, kmod searched for the modules only in the /lib/modules/<kernel version> directory. Consequently, users needed to perform additional actions, for example, run the /usr/sbin/weak-modules script to install symlinks, to make the modules loadable. With this update, kmod have been modified to search for the modules anywhere in the file system. As a result, users can now install new modules to a separate directory, configure the kmod tools to look for modules there, and the modules will be available automatically for the new kernel. Users can also specify several directories for a kernel, or different directories for different kernels. The kernel version is specified with a regular expression. 
(BZ#1361857) The Linux kernel now supports encrypted SMB 3 connections Prior to introducing this feature, the kernel only supported unencrypted connections when using the Server Message Block (SMB) protocol. This update adds encryption support for SMB 3.0 and later protocol versions. As a result, users can mount SMB shares using encryption, if the server provides or requires this feature. To mount a share using the encrypted SMB protocol, pass the seal mount option together with the vers mount option set to 3.0 or later to the mount command. For further details and an example, see the seal parameter description in https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/mounting_an_smb_share#tab.frequently_used_mount_options . (BZ#1429710) SME enabled on AMD Naples platforms With this update, AMD Secure Memory Encryption (SME) is provided by systems based on AMD Naples platforms. The Advanced Encryption Standard (AES) engine has the ability to encrypt and decrypt dynamic random access memory (DRAM). SME , provided by the AES engine, is intended to protect machines against hardware-probing attacks. To activate SME , boot the system with the kernel parameter mem_encrypt=on . (BZ#1361287) Support for the ie31200_edac driver This enhancement adds support for the ie31200_edac driver to the consumer version of Skylake and Kabi Lake CPU families. (BZ#1482253) EDAC now supports GHES This enhancement adds Error Detection and Correction (EDAC) support for using the Generic Hardware Error Source (GHES) provided by BIOS. GHES is now used as a source for memory corrected and uncorrected errors instead of a hardware specific driver. (BZ#1451916) CUIR enhanced scope detection is now fully supported Support for Control Unit Initiated Reconfiguration (CUIR) enables the Direct Access Storage Device (DASD) device driver to automatically take paths to DASDs offline for concurrent services. If other paths to the DASD are available, the DASD stays operational. CUIR informs the DASD device driver when the paths are available again, and the device driver attempts to vary them back online. In addition to the support for Linux instances running in Logical Partitioning (LPAR) mode, support for Linux instances on IBM z/VM systems has been added. (BZ#1494476) kdump allows a vmcore collection without the root file system being mounted In Red Hat Enterprise Linux 7.4, kdump required the root file system to be mounted although this is not always necessary for the collection of a vmcore image file. Consequently, kdump failed to collect a vmcore file if the root device could not be mounted when the dump target was not on the root file system, but, for example, on a usb or on the network. With this enhancement, if the root device is not required for dump, it is not mounted, and a vmcore file can be collected. (BZ# 1431974 , BZ#1460652) KASLR fully supported and enabled by default Kernel address space layout randomization (KASLR), which was previously available as a Technology Preview, is fully supported in Red Hat Enterprise Linux 7.5 on the AMD64 and Intel 64 architectures. KASLR is a kernel feature that contains two parts, kernel text KASLR and mm KASLR. These two parts work together to enhance the security of the Linux kernel. The physical address and virtual address of kernel text itself are randomized to a different position separately. 
The physical address of the kernel can be anywhere under 64TB, while the virtual address of the kernel is restricted between [0xffffffff80000000, 0xffffffffc0000000], the 1GB space. The starting address of three mm sections (the direct mapping, vmalloc , and vmemmap section) is randomized in a specific area. Previously, starting addresses of these sections were fixed values. KASLR can thus prevent inserting and redirecting the execution of the kernel to a malicious code if this code relies on knowing where symbols of interest are located in the kernel address space. KASLR code is now compiled in the Linux kernel, and it is enabled by default. If you want to disable it explicitly, add the nokaslr kernel option to the kernel command line. (BZ#1491226) Intel(R) Omni-Path Architecture (OPA) Host Software Intel(R) Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 7.5. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Intel(R) Omni-Path Architecture documentation, see https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_RHEL_7_5_RN_J98644.pdf . (BZ# 1543995 ) noreplace-paravirt has been removed from the kernel command line parameters The noreplace-paravirt kernel command line parameter has been removed, because the parameter is no longer compatible with the patches to mitigate the Spectre and Meltdown vulnerabilities. Booting AMD64 and Intel 64 systems with noreplace-paravirt in kernel command line will cause repeated reboots of the operating system. (BZ#1538911) The new EFI memmap implementation is now available on SGI UV2+ systems Prior to this update, the Extensible Firmware Interface (EFI) stable runtime services mapping across kexec reboot ( memmap ) implementation was not available on Silicon Graphics International (SGI) UV2 and later systems. This update adds support for EFI memmap . Additionally, this update also enables use of Secure Boot with the kdump kernel. (BZ#1102454) Mounting pNFS shares with flexible file layout is now fully supported Flexible file layout on pNFS clients was first introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. With Red Hat Enterprise Linux 7.5, it is now fully supported. pNFS flexible file layout enables advanced features such as non-disruptive file mobility and client-side mirroring, which provides enhanced usability in areas such as databases, big data, and virtualization. See https://datatracker.ietf.org/doc/draft-ietf-nfsv4-flex-files/ for detailed information about pNFS flexible file layout. (BZ#1349668) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_kernel |
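Two of the items above lend themselves to quick checks from a shell. The following sketch is illustrative only; the share path, mount point, and user are placeholders rather than values from the release notes:

grep -o -m1 -w pku /proc/cpuinfo    # any output means the CPU advertises Memory Protection Keys
mount -t cifs //server.example.com/share /mnt/share -o vers=3.0,seal,username=<user>    # encrypted SMB 3.0 mount using the seal option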
Installation overview | Installation overview OpenShift Container Platform 4.17 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_overview/index |
Chapter 4. Network Observability Operator in OpenShift Container Platform | Chapter 4. Network Observability Operator in OpenShift Container Platform Network Observability is an OpenShift operator that deploys a monitoring pipeline to collect and enrich network traffic flows that are produced by the Network Observability eBPF agent. 4.1. Viewing statuses The Network Observability Operator provides the Flow Collector API. When a Flow Collector resource is created, it deploys pods and services to create and store network flows in the Loki log store, as well as to display dashboards, metrics, and flows in the OpenShift Container Platform web console. Procedure Run the following command to view the state of FlowCollector : $ oc get flowcollector/cluster Example output Check the status of pods running in the netobserv namespace by entering the following command: $ oc get pods -n netobserv Example output flowlogs-pipeline pods collect flows, enrich the collected flows, and then send the flows to the Loki storage. netobserv-plugin pods create a visualization plugin for the OpenShift Container Platform Console. Check the status of pods running in the namespace netobserv-privileged by entering the following command: $ oc get pods -n netobserv-privileged Example output netobserv-ebpf-agent pods monitor network interfaces of the nodes to get flows and send them to flowlogs-pipeline pods. If you are using the Loki Operator, check the status of pods running in the openshift-operators-redhat namespace by entering the following command: $ oc get pods -n openshift-operators-redhat Example output 4.2. Network Observability Operator architecture The Network Observability Operator provides the FlowCollector API, which is instantiated at installation and configured to reconcile the eBPF agent , the flowlogs-pipeline , and the netobserv-plugin components. Only a single FlowCollector per cluster is supported. The eBPF agent runs on each cluster node with some privileges to collect network flows. The flowlogs-pipeline receives the network flows data and enriches the data with Kubernetes identifiers. If you choose to use Loki, the flowlogs-pipeline sends flow logs data to Loki for storing and indexing. The netobserv-plugin , which is a dynamic OpenShift Container Platform web console plugin, queries Loki to fetch network flows data. Cluster-admins can view the data in the web console. If you do not use Loki, you can generate metrics with Prometheus. Those metrics and their related dashboards are accessible in the web console. For more information, see "Network Observability without Loki". If you are using the Kafka option, the eBPF agent sends the network flow data to Kafka, and the flowlogs-pipeline reads from the Kafka topic before sending to Loki, as shown in the following diagram. Additional resources Network Observability without Loki 4.3. Viewing Network Observability Operator status and configuration You can inspect the status and view the details of the FlowCollector using the oc describe command. Procedure Run the following command to view the status and configuration of the Network Observability Operator: $ oc describe flowcollector/cluster | [
"oc get flowcollector/cluster",
"NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready",
"oc get pods -n netobserv",
"NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m",
"oc get pods -n netobserv-privileged",
"NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h",
"oc describe flowcollector/cluster"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/nw-network-observability-operator |
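If you prefer machine-readable output to the describe view above, the same status can be read directly from the custom resource. This is a hedged sketch; the exact shape of status.conditions is an assumption about the FlowCollector CRD rather than something stated in this chapter:

oc get flowcollector cluster -o yaml    # full spec and status of the FlowCollector
oc get flowcollector cluster -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'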
Part III. Reference material | Part III. Reference material | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/reference_material |
Chapter 7. Forwarding telemetry data | Chapter 7. Forwarding telemetry data You can use the OpenTelemetry Collector to forward your telemetry data. 7.1. Forwarding traces to a TempoStack instance To configure forwarding traces to a TempoStack instance, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources . Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespaces resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR). Example OpenTelemetryCollector apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-simplest-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created. 2 The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the gRPC protocol. Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4 Additional resources OpenTelemetry Collector documentation Deployment examples on GitHub 7.2. Forwarding logs to a LokiStack instance You can deploy the OpenTelemetry Collector to forward logs to a LokiStack instance. 
Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Loki Operator is installed. A supported LokiStack instance is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging Create a cluster role that grants the Collector's service account the permissions to push logs to the LokiStack application tenant. Example ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: ["loki.grafana.com"] resourceNames: ["logs"] resources: ["application"] verbs: ["create"] - apiGroups: [""] resources: ["pods", "namespaces", "nodes"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets"] verbs: ["get", "list", "watch"] - apiGroups: ["extensions"] resources: ["replicasets"] verbs: ["get", "list", "watch"] Bind the cluster role to the service account. Example ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging Create an OpenTelemetryCollector custom resource (CR) object. Example OpenTelemetryCollector CR object apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes["level"], ConvertCase(severity_text, "lower")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug] 1 Provides the following resource attributes to be used by the web console: kubernetes.namespace_name , kubernetes.pod_name , kubernetes.container_name , and log_type . 2 Enables the BearerTokenAuth Extension that is required by the OTLP HTTP Exporter. 3 Enables the OTLP HTTP Exporter to export logs from the Collector. 
Tip You can deploy telemetrygen as a test: apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name="telemetrygen" restartPolicy: Never backoffLimit: 4 Additional resources Installing LokiStack log storage | [
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: {} otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:latest args: - traces - --otlp-endpoint=otel-collector:4317 - --otlp-insecure - --duration=30s - --workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-logs-writer rules: - apiGroups: [\"loki.grafana.com\"] resourceNames: [\"logs\"] resources: [\"application\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"pods\", \"namespaces\", \"nodes\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"extensions\"] resources: [\"replicasets\"] verbs: [\"get\", \"list\", \"watch\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-logs-writer roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-collector-logs-writer subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: openshift-logging",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: openshift-logging spec: serviceAccount: otel-collector-deployment config: extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" receivers: otlp: protocols: grpc: {} http: {} processors: k8sattributes: {} resource: attributes: 1 - key: kubernetes.namespace_name from_attribute: k8s.namespace.name action: upsert - key: kubernetes.pod_name from_attribute: k8s.pod.name action: upsert - key: kubernetes.container_name from_attribute: k8s.container.name action: upsert - key: log_type value: application action: upsert transform: log_statements: - context: log statements: - set(attributes[\"level\"], ConvertCase(severity_text, \"lower\")) exporters: otlphttp: endpoint: https://logging-loki-gateway-http.openshift-logging.svc.cluster.local:8080/api/logs/v1/application/otlp encoding: json tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth debug: verbosity: detailed service: extensions: [bearertokenauth] 2 pipelines: logs: receivers: [otlp] processors: [k8sattributes, transform, resource] exporters: [otlphttp] 3 logs/test: receivers: [otlp] processors: [] exporters: [debug]",
"apiVersion: batch/v1 kind: Job metadata: name: telemetrygen spec: template: spec: containers: - name: telemetrygen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.106.1 args: - logs - --otlp-endpoint=otel-collector.openshift-logging.svc.cluster.local:4317 - --otlp-insecure - --duration=180s - --workers=1 - --logs=10 - --otlp-attributes=k8s.container.name=\"telemetrygen\" restartPolicy: Never backoffLimit: 4"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/otel-forwarding-telemetry-data |
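A quick way to confirm that the telemetrygen job in the preceding examples actually delivered data is to check the job status and the collector logs. This is a sketch only: it assumes the OpenTelemetry Operator derives the deployment name otel-collector from an OpenTelemetryCollector resource named otel, and that the resources live in the otel-collector-example namespace; adjust names to your environment.
# Check that the trace-generating job completed
oc get job telemetrygen -n otel-collector-example
# Inspect the collector logs for received and exported data
oc logs deployment/otel-collector -n otel-collector-example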
Chapter 4. SAP Automation and Performance | Chapter 4. SAP Automation and Performance 4.1. RHEL System Roles for SAP The Red Hat Enterprise Linux System Roles for SAP, provided exclusively with the Red Hat Enterprise Linux for SAP Solutions subscription, remove human error from complex and repetitive SAP configuration tasks, such as configuration of a Red Hat Enterprise Linux system for installation of SAP HANA or SAP NetWeaver software. Customers can use RHEL system roles for SAP to enforce SAP best practices for configuration and setup of both SAP NetWeaver and SAP HANA deployments based on RHEL. Additional resources For more information, see Red Hat Enterprise Linux System Roles for SAP . 4.2. tuned To ensure that RHEL is configured appropriately to best support SAP workloads, RHEL for SAP Solutions provides the tuned profiles "sap" and "sap-hana", which contain many of the SAP best practices and some additional configuration settings. Additional resources For more information, see Getting started on your SAP HANA journey with RHEL 8 for SAP Solutions and SAP Note 2777782 . 4.3. Compatibility libraries RHEL for SAP Solutions provides additional GCC runtime compatibility libraries required by newer SAP NetWeaver and SAP HANA releases. These GCC runtime compatibility libraries can be installed independently of the standard GCC runtime libraries provided by RHEL. 4.4. Smart Management The RHEL subscription includes the Smart Management Add-on to provide easy management and updates of Red Hat Enterprise Linux systems by using Red Hat Satellite Server. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/overview_of_red_hat_enterprise_linux_for_sap_solutions_subscription/assembly_sap-automation-and-performance_overview-of-rhel-for-sap-solutions-subscription-combined
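As a minimal illustration of the tuned profiles mentioned in section 4.2 above, the sap-hana profile can be activated with tuned-adm after the profile package from RHEL for SAP Solutions is installed. The package name below is an assumption and may differ between releases; follow the referenced getting-started guide and SAP Note for the supported procedure.
# Install the SAP HANA tuned profile (package name may vary by release)
yum install tuned-profiles-sap-hana
# Activate the profile and confirm which profile is in use
tuned-adm profile sap-hana
tuned-adm active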
Chapter 18. kubernetes | Chapter 18. kubernetes The namespace for Kubernetes-specific metadata Data type group 18.1. kubernetes.pod_name The name of the pod Data type keyword 18.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 18.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 18.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 18.5. kubernetes.host The Kubernetes node name Data type keyword 18.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 18.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 18.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 18.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 18.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 18.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 18.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 18.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 18.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 18.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 18.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 18.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 18.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 18.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 18.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 18.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 18.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 18.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 18.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 18.9.5. 
kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 18.9.6. kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 18.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 18.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields |
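To make the field layout in this chapter concrete, the following is an illustrative fragment of a log record that uses only fields described above; all values are hypothetical.
{
  "kubernetes": {
    "pod_name": "my-app-6b7c9d5f4-abcde",
    "namespace_name": "my-namespace",
    "container_name": "my-app",
    "host": "worker-node-1",
    "labels": { "app": "my-app" }
  }
}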
B.2. Constraints Reference | B.2. Constraints Reference Constraints are used to define the allowable contents of a certificate and the values associated with that content. This section lists the predefined constraints with complete definitions of each. B.2.1. Basic Constraints Extension Constraint The Basic Constraints extension constraint checks if the basic constraint in the certificate request satisfies the criteria set in this constraint. Table B.25. Basic Constraints Extension Constraint Configuration Parameters Parameter Description basicConstraintsCritical Specifies whether the extension can be marked critical or noncritical. Select true to mark this extension critical; select false to prevent this extension from being marked critical. Selecting a hyphen, - , implies no criticality preference. basicConstraintsIsCA Specifies whether the certificate subject is a CA. Select true to require a value of true for this parameter (is a CA); select false to disallow a value of true for this parameter; select a hyphen, - , to indicate no constraints are placed for this parameter. basicConstraintsMinPathLen Specifies the minimum allowable path length, the maximum number of CA certificates that may be chained below (subordinate to) the subordinate CA certificate being issued. The path length affects the number of CA certificates used during certificate validation. The chain starts with the end-entity certificate being validated and moves up. This parameter has no effect if the extension is set in end-entity certificates. The permissible values are 0 or n . The value must be less than the path length specified in the Basic Constraints extension of the CA signing certificate. 0 specifies that no subordinate CA certificates are allowed below the subordinate CA certificate being issued; only an end-entity certificate may follow in the path. n must be an integer greater than zero. This is the minimum number of subordinate CA certificates allowed below the subordinate CA certificate being used. basicConstraintsMaxPathLen Specifies the maximum allowable path length, the maximum number of CA certificates that may be chained below (subordinate to) the subordinate CA certificate being issued. The path length affects the number of CA certificates used during certificate validation. The chain starts with the end-entity certificate being validated and moves up. This parameter has no effect if the extension is set in end-entity certificates. The permissible values are 0 or n . The value must be greater than the path length specified in the Basic Constraints extension of the CA signing certificate. 0 specifies that no subordinate CA certificates are allowed below the subordinate CA certificate being issued; only an end-entity certificate may follow in the path. n must be an integer greater than zero. This is the maximum number of subordinate CA certificates allowed below the subordinate CA certificate being used. If the field is blank, the path length defaults to a value determined by the path length set on the Basic Constraints extension in the issuer's certificate. If the issuer's path length is unlimited, the path length in the subordinate CA certificate is also unlimited. If the issuer's path length is an integer greater than zero, the path length in the subordinate CA certificate is set to a value one less than the issuer's path length; for example, if the issuer's path length is 4, the path length in the subordinate CA certificate is set to 3. B.2.2. 
CA Validity Constraint The CA Validity constraint checks if the validity period in the certificate template is within the CA's validity period. If the validity period of the certificate is outside the CA certificate's validity period, the constraint is rejected. B.2.3. Extended Key Usage Extension Constraint The Extended Key Usage extension constraint checks if the Extended Key Usage extension on the certificate satisfies the criteria set in this constraint. Table B.26. Extended Key Usage Extension Constraint Configuration Parameters Parameter Description exKeyUsageCritical When set to true , the extension can be marked as critical. When set to false , the extension can be marked noncritical. exKeyUsageOIDs Specifies the allowable OIDs that identify a key-usage purpose. Multiple OIDs can be added in a comma-separated list. B.2.4. Extension Constraint This constraint implements the general extension constraint. It checks if the extension is present. Table B.27. Extension Constraint Parameter Description extCritical Specifies whether the extension can be marked critical or noncritical. Select true to mark the extension critical; select false to mark it noncritical. Select - to enforce no preference. extOID The OID of an extension that must be present in the cert to pass the constraint. B.2.5. Key Constraint This constraint checks the size of the key for RSA keys, and the name of the elliptic curve for EC keys. When used with RSA keys the KeyParameters parameter contains a comma-separated list of legal key sizes, and with EC keys the KeyParameters parameter contains a comma-separated list of available ECC curves. Table B.28. Key Constraint Configuration Parameters Parameter Description keyType Gives a key type; this is set to - by default and uses an RSA key system. The choices are rsa and ec. If the key type is specified and not identified by the system, the constraint will be rejected. KeyParameters Defines the specific key parameters. The parameters which are set for the key differ, depending on the value of the keyType parameter (meaning, depending on the key type). With RSA keys, the KeyParameters parameter contains a comma-separated list of legal key sizes. With ECC keys, the KeyParameters parameter contains a comma-separated list of available ECC curves. B.2.6. Key Usage Extension Constraint The Key Usage extension constraint checks if the key usage constraint in the certificate request satisfies the criteria set in this constraint. Table B.29. Key Usage Extension Constraint Configuration Parameters Parameter Description keyUsageCritical Select true to mark this extension critical; select false to mark it noncritical. Select - for no preference. keyUsageDigitalSignature Specifies whether to sign SSL client certificates and S/MIME signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageNonRepudiation Specifies whether to set S/MIME signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. Warning Using this bit is controversial. Carefully consider the legal consequences of its use before setting it for any certificate. keyEncipherment Specifies whether to set the extension for SSL server certificates and S/MIME encryption certificates. 
Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageDataEncipherment Specifies whether to set the extension when the subject's public key is used to encrypt user data, instead of key material. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageKeyAgreement Specifies whether to set the extension whenever the subject's public key is used for key agreement. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageCertsign Specifies whether the extension applies for all CA signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageCRLSign Specifies whether to set the extension for CA signing certificates that are used to sign CRLs. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageEncipherOnly Specifies whether to set the extension if the public key is to be used only for encrypting data. If this bit is set, keyUsageKeyAgreement should also be set. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageDecipherOnly Specifies whether to set the extension if the public key is to be used only for deciphering data. If this bit is set, keyUsageKeyAgreement should also be set. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. B.2.7. Netscape Certificate Type Extension Constraint Warning This constraint is obsolete. Instead of using the Netscape Certificate Type extension constraint, use the Key Usage extension or Extended Key Usage extension. The Netscape Certificate Type extension constraint checks if the Netscape Certificate Type extension in the certificate request satisfies the criteria set in this constraint. B.2.8. No Constraint This constraint implements no constraint. When chosen along with a default, there are no constraints placed on that default. B.2.9. Renewal Grace Period Constraint The Renewal Grace Period Constraint sets rules on when a user can renew a certificate based on its expiration date. For example, users cannot renew a certificate until a certain time before it expires or if it goes past a certain time after its expiration date. One important thing to remember when using this constraint is that this constraint is set on the original enrollment profile , not the renewal profile. The rules for the renewal grace period are part of the original certificate and are carried over and applied for any subsequent renewals. This constraint is only available with the No Default extension. Table B.30. Renewal Grace Period Constraint Configuration Parameters Parameter Description renewal.graceAfter Sets the period, in days, after the certificate expires that it can be submitted for renewal. If the certificate has been expired for longer than that time, then the renewal request is rejected. If no value is given, there is no limit. 
renewal.graceBefore Sets the period, in days, before the certificate expires that it can be submitted for renewal. If the certificate is not that close to its expiration date, then the renewal request is rejected. If no value is given, there is no limit. B.2.10. Signing Algorithm Constraint The Signing Algorithm constraint checks if the signing algorithm in the certificate request satisfies the criteria set in this constraint. Table B.31. Signing Algorithms Constraint Configuration Parameters Parameter Description signingAlgsAllowed Sets the signing algorithms that can be specified to sign the certificate. The algorithms can be any or all of the following: MD2withRSA MD5withRSA SHA256withRSA SHA512withRSA SHA256withEC SHA384withEC SHA512withEC B.2.11. Subject Name Constraint The Subject Name constraint checks if the subject name in the certificate request satisfies the criteria. Table B.32. Subject Name Constraint Configuration Parameters Parameter Description Pattern Specifies a regular expression or other string to build the subject DN. Subject Names and Regular Expressions The regular expression for the Subject Name Constraint is matched by the Java facility for matching regular expressions. The format for these regular expressions is described in https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html . This allows wildcards such as asterisks ( * ) to match any number of characters and periods ( . ) to match any single character. For example, if the pattern of the subject name constraint is set to uid=.* , the certificate profile framework checks if the subject name in the certificate request matches the pattern. A subject name like uid=user, o=Example, c=US satisfies the pattern uid=.* . The subject name cn=user, o=example,c=US does not satisfy the pattern. uid=.* means the subject name must begin with the uid attribute; the period-asterisk ( .* ) wildcards allow any type and number of characters to follow uid . It is possible to require internal patterns, such as .*ou=Engineering.* , which requires the ou=Engineering attribute with any kind of string before and after it. This matches cn=jdoe,ou=internal,ou=west coast,ou=engineering,o="Example Corp",st=NC as well as uid=bjensen,ou=engineering,dc=example,dc=com . Lastly, it is also possible to allow requests that are either one string or another by setting a pipe sign ( | ) between the options. For example, to permit subject names that contain either ou=engineering,ou=people or ou=engineering,o="Example Corp" , the pattern is .*ou=engineering,ou=people.* | .*ou=engineering,o="Example Corp".* . Note For constructing a pattern which uses a special character, such as a period ( . ), escape the character with a back slash ( \ ). For example, to search for the string o="Example Inc." , set the pattern to o="Example Inc\." . Subject Names and the UID or CN in the Certificate Request The pattern that is used to build the subject DN can also be based on the CN or UID of the person requesting the certificate. The Subject Name Constraint sets the pattern of the CN (or UID) to recognize in the DN of the certificate request, and then the Subject Name Default builds on that CN to create the subject DN of the certificate, using a predefined directory tree. For example, to use the CN of the certificate request: B.2.12. Unique Key Constraint This constraint checks that the public key is unique. Table B.33. 
Unique Key Constraints Parameters Parameter Description allowSameKeyRenewal A request is considered a renewal and is accepted if this parameter is set to true , if a public key is not unique, and if the subject DN matches an existing certificate. However, if the public key is a duplicate and does not match an existing Subject DN, the request is rejected. When the parameter is set to false , a duplicate public key request will be rejected. B.2.13. Unique Subject Name Constraint The Unique Subject Name constraint restricts the server from issuing multiple certificates with the same subject names. When a certificate request is submitted, the server automatically checks the nickname against other issued certificate nicknames. This constraint can be applied to certificate enrollment and renewal through the end-entities' page. Certificates cannot have the same subject name unless one certificate is expired or revoked (and not on hold). So, active certificates cannot share a subject name, with one exception: if certificates have different key usage bits, then they can share the same subject name, because they have different uses. Table B.34. Unique Subject Name Constraint Configuration Parameters Parameter Description enableKeyUsageExtensionChecking Optional setting which allows certificates to have the same subject name as long as their key usage settings are different. This is either true or false . The default is true , which allows duplicate subject names. B.2.14. Validity Constraint The Validity constraint checks if the validity period in the certificate request satisfies the criteria. The parameters provided must be sensible values. For instance, a notBefore parameter that provides a time which has already passed will not be accepted, and a notAfter parameter that provides a time earlier than the notBefore time will not be accepted. Table B.35. Validity Constraint Configuration Parameters Parameter Description range The range of the validity period. This is an integer which sets the number of days. The difference (in days) between the notBefore time and the notAfter time must be less than the range value, or this constraint will be rejected. notBeforeCheck Verifies that the range is not within the grace period. When the NotBeforeCheck Boolean parameter is set to true, the system will check that the notBefore time is not greater than the current time plus the notBeforeGracePeriod value. If the notBeforeTime is not between the current time and the notBeforeGracePeriod value, this constraint will be rejected. notBeforeGracePeriod The grace period (in seconds) after the notBefore time. If the notBeforeTime is not between the current time and the notBeforeGracePeriod value, this constraint will be rejected. This constraint is only checked if the notBeforeCheck parameter has been set to true. notAfterCheck Verifies whether the given time is not after the expiration period. When the notAfterCheck Boolean parameter is set to true, the system will check that the notAfter time is not greater than the current time. If the current time exceeds the notAfter time, this constraint will be rejected. | [
"policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=CN=USDrequest.req_subject_name.cnUSD,DC=example, DC=com"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Constraints_Reference |
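For comparison with the Subject Name Constraint example above, a Validity Constraint could be configured in a profile with properties along the following lines. The policy set name, index, and values are illustrative, the class ID follows the same naming convention as the example above, and the parameters used are the ones listed in Table B.35; verify the exact class ID and parameter names against your own profile configuration.
policyset.serverCertSet.2.constraint.class_id=validityConstraintImpl
policyset.serverCertSet.2.constraint.name=Validity Constraint
policyset.serverCertSet.2.constraint.params.range=720
policyset.serverCertSet.2.constraint.params.notBeforeCheck=false
policyset.serverCertSet.2.constraint.params.notAfterCheck=false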
Chapter 1. How to Upgrade | Chapter 1. How to Upgrade An in-place upgrade is the recommended and supported way to upgrade your system to the major version of RHEL. 1.1. How to upgrade from Red Hat Enterprise Linux 6 The Upgrading from RHEL 6 to RHEL 7 guide describes steps for an in-place upgrade from RHEL 6 to RHEL 7. The supported in-place upgrade path is from RHEL 6.10 to RHEL 7.9. If you are using SAP HANA, follow How do I upgrade from RHEL 6 to RHEL 7 with SAP HANA instead. Note that the upgrade path for RHEL with SAP HANA might differ. The process of upgrading from RHEL 6 to RHEL 7 consists of the following steps: Check that Red Hat supports the upgrade of your system. Prepare your system for the upgrade by installing required repositories and packages and by removing unsupported packages. Check your system for problems that might affect your upgrade using the Preupgrade Assistant. Upgrade your system by running the Red Hat Upgrade Tool. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/migration_planning_guide/chap-red_hat_enterprise_linux-migration_planning_guide-upgrading |
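As a rough sketch of the last two steps above, the Preupgrade Assistant and the Red Hat Upgrade Tool are run from the command line approximately as follows. The repository URL is a placeholder, and the full set of options and the required preparation are described in the Upgrading from RHEL 6 to RHEL 7 guide referenced above.
# Run the Preupgrade Assistant and review the generated report
preupg
# Run the Red Hat Upgrade Tool against a RHEL 7 installation repository
# (the --instrepo value is a placeholder)
redhat-upgrade-tool --network 7.9 --instrepo <RHEL_7_repository_URL>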
Chapter 5. Getting Started with Virtual Machine Manager | Chapter 5. Getting Started with Virtual Machine Manager The Virtual Machine Manager, also known as virt-manager , is a graphical tool for creating and managing guest virtual machines. This chapter provides a description of the Virtual Machine Manager and how to run it. Note You can only run the Virtual Machine Manager on a system that has a graphical interface. For more detailed information about using the Virtual Machine Manager, see the other Red Hat Enterprise Linux virtualization guides . 5.1. Running Virtual Machine Manager To run the Virtual Machine Manager, select it in the list of applications or use the following command: The Virtual Machine Manager opens to the main window. Figure 5.1. The Virtual Machine Manager Note If running virt-manager fails, ensure that the virt-manager package is installed. For information on installing the virt-manager package, see Installing the Virtualization Packages in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide. | [
"virt-manager"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-Virtualization_Manager-Introduction |
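If launching virt-manager fails as described in the note in the Virtual Machine Manager chapter above, the package can usually be installed with yum on RHEL 7. The package name below is the common one, but see the referenced Virtualization Deployment and Administration Guide for the supported installation procedure.
# Install the Virtual Machine Manager package, then launch it
yum install virt-manager
virt-manager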
Pipelines CLI (tkn) reference | Pipelines CLI (tkn) reference Red Hat OpenShift Pipelines 1.18 The tkn CLI reference for OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_cli_tkn_reference/index |
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.26/pr01 |
Proof of Concept - Deploying Red Hat Quay | Proof of Concept - Deploying Red Hat Quay Red Hat Quay 3.10 Deploying Red Hat Quay Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/proof_of_concept_-_deploying_red_hat_quay/index |
Chapter 10. Saving and restoring virtual machines | Chapter 10. Saving and restoring virtual machines To free up system resources, you can shut down a virtual machine (VM) running on that system. However, when you require the VM again, you must boot up the guest operating system (OS) and restart the applications, which may take a considerable amount of time. To reduce this downtime and enable the VM workload to start running sooner, you can use the save and restore feature to avoid the OS shutdown and boot sequence entirely. This section provides information about saving VMs, as well as about restoring them to the same state without a full VM boot-up. 10.1. How saving and restoring virtual machines works Saving a virtual machine (VM) saves its memory and device state to the host's disk, and immediately stops the VM process. You can save a VM that is either in a running or paused state, and upon restoring, the VM will return to that state. This process frees up RAM and CPU resources on the host system in exchange for disk space, which may improve the host system performance. When the VM is restored, because the guest OS does not need to be booted, the long boot-up period is avoided as well. To save a VM, you can use the command line (CLI). For instructions, see Saving virtual machines by using the command line . To restore a VM, you can use the CLI or the web console GUI . You can also save and restore the state of a VM by using snapshots. For more information, see Saving and restoring virtual machine state by using snapshots . 10.2. Saving a virtual machine by using the command line You can save a virtual machine (VM) and its current state to the host's disk. This is useful, for example, when you need to use the host's resources for some other purpose. The saved VM can then be quickly restored to its running state. To save a VM by using the command line, follow the procedure below. Prerequisites Ensure you have sufficient disk space to save the VM and its configuration. Note that the space occupied by the VM depends on the amount of RAM allocated to that VM. Ensure the VM is persistent. Optional: Back up important data from the VM if required. Procedure Use the virsh managedsave utility. For example, the following command stops the demo-guest1 VM and saves its configuration. The saved VM file is located by default in the /var/lib/libvirt/qemu/save directory as demo-guest1.save . The next time the VM is started, it will automatically restore the saved state from the above file. Verification List the VMs that have managed save enabled. In the following example, the VMs listed as saved have their managed save enabled. To list the VMs that have a managed save image: Note that to list the saved VMs that are in a shut off state, you must use the --all or --inactive options with the command. Troubleshooting If the saved VM file becomes corrupted or unreadable, restoring the VM will initiate a standard VM boot instead. Additional resources The virsh managedsave --help command Restoring a saved VM by using the command line Restoring a saved VM by using the web console 10.3. Starting a virtual machine by using the command line You can use the command line (CLI) to start a shut-down virtual machine (VM) or restore a saved VM. By using the CLI, you can start both local and remote VMs. Prerequisites An inactive VM that is already defined. The name of the VM. For remote VMs: The IP address of the host where the VM is located. Root access privileges to the host. 
Procedure For a local VM, use the virsh start utility. For example, the following command starts the demo-guest1 VM. For a VM located on a remote host, use the virsh start utility along with the QEMU+SSH connection to the host. For example, the following command starts the demo-guest1 VM on the 192.0.2.1 host. Additional resources The virsh start --help command Setting up easy access to remote virtualization hosts Starting virtual machines automatically when the host starts 10.4. Starting virtual machines by using the web console If a virtual machine (VM) is in the shut off state, you can start it by using the RHEL 9 web console. You can also configure the VM to be started automatically when the host starts. Prerequisites You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . An inactive VM that is already defined. The name of the VM. Procedure In the Virtual Machines interface, click the VM you want to start. A new page opens with detailed information about the selected VM and controls for shutting down and deleting the VM. Click Run . The VM starts, and you can connect to its console or graphical output . Optional: To configure the VM to start automatically when the host starts, toggle the Autostart checkbox in the Overview section. If you use network interfaces that are not managed by libvirt, you must also make additional changes to the systemd configuration. Otherwise, the affected VMs might fail to start, see starting virtual machines automatically when the host starts . Additional resources Shutting down virtual machines in the web console Restarting virtual machines by using the web console | [
"virsh managedsave demo-guest1 Domain 'demo-guest1' saved by libvirt",
"virsh list --managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 saved - demo-guest2 shut off",
"virsh list --with-managed-save --all Id Name State ---------------------------------------------------- - demo-guest1 shut off",
"virsh start demo-guest1 Domain 'demo-guest1' started",
"virsh -c qemu+ssh://[email protected]/system start demo-guest1 [email protected]'s password: Domain 'demo-guest1' started"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/saving-and-restoring-virtual-machines_configuring-and-managing-virtualization |
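For the troubleshooting note about a corrupted saved VM file in the chapter above, one option is to discard the managed save image so that the next start performs a normal boot. This is a sketch using the example VM name from the chapter; the managedsave-remove subcommand removes only the saved state, not the VM itself.
# Discard the managed save image for demo-guest1; the VM will boot normally on next start
virsh managedsave-remove demo-guest1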
Chapter 3. Deploying OpenShift AI in a disconnected environment | Chapter 3. Deploying OpenShift AI in a disconnected environment Read this section to understand how to deploy Red Hat OpenShift AI as a development and testing environment for data scientists in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall. In this case, clusters cannot access the remote registries where Red Hat provided OperatorHub sources reside. Instead, the Red Hat OpenShift AI Operator can be deployed to a disconnected environment using a private registry to mirror the images. Installing OpenShift AI in a disconnected environment involves the following high-level tasks: Confirm that your OpenShift cluster meets all requirements. See Requirements for OpenShift AI Self-Managed . Add administrative users for OpenShift. See Adding administrative users in OpenShift . Mirror images to a private registry. See Mirroring images to a private registry for a disconnected installation . Install the Red Hat OpenShift AI Operator. See Installing the Red Hat OpenShift AI Operator . Install OpenShift AI components. See Installing and managing Red Hat OpenShift AI components . Configure user and administrator groups to provide user access to OpenShift AI. See Adding users to OpenShift AI user groups . Provide your users with the URL for the OpenShift cluster on which you deployed OpenShift AI. See Accessing the OpenShift AI dashboard . Optionally, configure and enable your accelerators in OpenShift AI to ensure that your data scientists can use compute-heavy workloads in their models. See Enabling accelerators . 3.1. Requirements for OpenShift AI Self-Managed You must meet the following requirements before you can install Red Hat OpenShift AI on your Red Hat OpenShift cluster in a disconnected environment: Product subscriptions You must have a subscription for Red Hat OpenShift AI Self-Managed. Contact your Red Hat account manager to purchase new subscriptions. If you do not yet have an account manager, complete the form at https://www.redhat.com/en/contact to request one. Cluster administrator access to your OpenShift cluster You must have an OpenShift cluster with cluster administrator access. Use an existing cluster or create a cluster by following the OpenShift Container Platform documentation: Installing a cluster in a disconnected environment . After you install a cluster, configure the Cluster Samples Operator by following the OpenShift Container Platform documentation: Configuring Samples Operator for a restricted cluster . Your cluster must have at least 2 worker nodes with at least 8 CPUs and 32 GiB RAM available for OpenShift AI to use when you install the Operator. To ensure that OpenShift AI is usable, additional cluster resources are required beyond the minimum requirements. To use OpenShift AI on single node OpenShift, the node has to have at least 32 CPUs and 128 GiB RAM. Your cluster is configured with a default storage class that can be dynamically provisioned. Confirm that a default storage class is configured by running the oc get storageclass command. If no storage classes are noted with (default) beside the name, follow the OpenShift Container Platform documentation to configure a default storage class: Changing the default storage class . For more information about dynamic provisioning, see Dynamic provisioning . Open Data Hub must not be installed on the cluster. 
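For the default storage class requirement above, a quick check looks like the following; the storage class name and provisioner in the commented output are illustrative only.
oc get storageclass
# Illustrative output -- exactly one class should be marked "(default)":
# NAME                PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      AGE
# gp3-csi (default)   ebs.csi.aws.com    Delete          WaitForFirstConsumer   30d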
For more information about managing the machines that make up an OpenShift cluster, see Overview of machine management . An identity provider configured for OpenShift Red Hat OpenShift AI uses the same authentication systems as Red Hat OpenShift Container Platform. See Understanding identity provider configuration for more information on configuring identity providers. Access to the cluster as a user with the cluster-admin role; the kubeadmin user is not allowed. Internet access on the mirroring machine Along with Internet access, the following domains must be accessible to mirror images required for the OpenShift AI Self-Managed installation: cdn.redhat.com subscription.rhn.redhat.com registry.access.redhat.com registry.redhat.io quay.io For CUDA-based images, the following domains must be accessible: ngc.download.nvidia.cn developer.download.nvidia.com Create custom namespaces By default, OpenShift AI uses predefined namespaces, but you can define a custom namespace for the operator and DSCI.applicationNamespace as needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly. If you are using custom namespaces, before installing the OpenShift AI Operator, you must have created and labeled them as required. Data science pipelines preparation Data science pipelines 2.0 contains an installation of Argo Workflows. If there is an existing installation of Argo Workflows that is not installed by data science pipelines on your cluster, data science pipelines will be disabled after you install OpenShift AI. Before installing OpenShift AI, ensure that your cluster does not have an existing installation of Argo Workflows that is not installed by data science pipelines, or remove the separate installation of Argo Workflows from your cluster. Before you can execute a pipeline in a disconnected environment, you must upload the images to your private registry. For more information, see Mirroring images to run pipelines in a restricted environment . You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account. Install KServe dependencies To support the KServe component, which is used by the single-model serving platform to serve large models, you must also install Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh and perform additional configuration. For more information, see About the single-model serving platform . If you want to add an authorization provider for the single-model serving platform, you must install the Red Hat - Authorino Operator. For information, see Adding an authorization provider for the single-model serving platform . Install model registry dependencies (Technology Preview feature) To use the model registry component, you must also install Operators for Red Hat Authorino, Red Hat OpenShift Serverless, and Red Hat OpenShift Service Mesh. For more information about configuring the model registry component, see Configuring the model registry component . Access to object storage Components of OpenShift AI require or can use S3-compatible object storage such as AWS S3, MinIO, Ceph, or IBM Cloud Storage. An object store is a data storage mechanism that enables users to access their data either as an object or as a file. 
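For the data science pipelines preparation above, one way to check whether a separate Argo Workflows installation is already present is to look for its CustomResourceDefinitions; this is only a heuristic and assumes the upstream argoproj.io API group names.
# List any Argo Workflows CRDs on the cluster
oc get crd | grep argoproj.io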
The S3 API is the recognized standard for HTTP-based access to object storage services. The object storage must be accessible to your OpenShift cluster. Deploy the object storage on the same disconnected network as your cluster. Object storage is required for the following components: Single- or multi-model serving platforms, to deploy stored models. See Deploying models on the single-model serving platform or Deploying a model by using the multi-model serving platform . Data science pipelines, to store artifacts, logs, and intermediate results. See Configuring a pipeline server and About pipeline logs . Object storage can be used by the following components: Workbenches, to access large datasets. See Adding a connection to your data science project . Distributed workloads, to pull input data from and push results to. See Running distributed data science workloads from data science pipelines . Code executed inside a pipeline. For example, to store the resulting model in object storage. See Overview of pipelines in Jupyterlab . 3.2. Adding administrative users in OpenShift Before you can install and configure OpenShift AI for your data scientist users, you must obtain OpenShift cluster administrator ( cluster-admin ) privileges. To assign cluster-admin privileges to a user, follow the steps in the relevant OpenShift documentation: OpenShift Container Platform: Creating a cluster admin OpenShift Dedicated: Managing OpenShift Dedicated administrators ROSA: Creating a cluster administrator user for quick cluster access 3.3. Mirroring images to a private registry for a disconnected installation You can install the Red Hat OpenShift AI Operator to your OpenShift cluster in a disconnected environment by mirroring the required container images to a private container registry. After mirroring the images to a container registry, you can install Red Hat OpenShift AI Operator by using OperatorHub. You can use the mirror registry for Red Hat OpenShift , a small-scale container registry, as a target for mirroring the required container images for OpenShift AI in a disconnected environment. Using the mirror registry for Red Hat OpenShift is optional if another container registry is already available in your installation environment. Prerequisites You have cluster administrator access to a running OpenShift Container Platform cluster, version 4.14 or greater. You have credentials for Red Hat OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Your mirroring machine is running Linux, has 100 GB of space available, and has access to the Internet so that it can obtain the images to populate the mirror repository. You have installed the OpenShift CLI ( oc ). If you plan to use NVIDIA GPUs, you have mirrored and deployed the NVIDIA GPU Operator. See Configuring the NVIDIA GPU Operator in the OpenShift Container Platform documentation. If you plan to use data science pipelines, you have mirrored the OpenShift Pipelines operator. If you plan to use the single-model serving platform to serve large models, you have mirrored the Operators for Red Hat OpenShift Serverless and Red Hat OpenShift Service Mesh. For more information, see Serving large models . If you plan to use the distributed workloads component, you have mirrored the Ray cluster image. Note This procedure uses the oc-mirror plugin v2 (the oc-mirror plugin v1 is now deprecated). For more information, see Changes from oc-mirror plugin v1 to v2 in the OpenShift documentation. Procedure Create a mirror registry. 
See Creating a mirror registry with mirror registry for Red Hat OpenShift in the OpenShift Container Platform documentation. To mirror registry images, install the oc-mirror OpenShift CLI plugin v2 on your mirroring machine running Linux. See Installing the oc-mirror OpenShift CLI plugin in the OpenShift Container Platform documentation. Important The oc-mirror plugin v1 is deprecated. Red Hat recommends that you use the oc-mirror plugin v2 for continued support and improvements. Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. See Configuring credentials that allow images to be mirrored in the OpenShift Container Platform documentation. Open the example image set configuration file ( rhoai-<version>.md ) from the disconnected installer helper repository and examine its contents. Using the example image set configuration file, create a file called imageset-config.yaml and populate it with values suitable for the image set configuration in your deployment. To view a list of the available OpenShift versions, run the following command. This might take several minutes. If the command returns errors, repeat the steps in Configuring credentials that allow images to be mirrored . To see the available channels for a package in a specific version of OpenShift Container Platform (for example, 4.18), run the following command: For information about subscription update channels, see Understanding update channels . Important The example image set configurations are for demonstration purposes only and might need further alterations depending on your deployment. To identify the attributes most suitable for your deployment, examine the documentation and use cases in Mirroring images for a disconnected installation by using the oc-mirror plugin v2 . Your imageset-config.yaml should look similar to the following example, where openshift-pipelines-operator-rh is required for data science pipelines, and both serverless-operator and servicemeshoperator are required for the KServe component. Download the specified image set configuration to a local file on your mirroring machine: Replace <mirror_rhoai> with the target directory where you want to output the image set file. The target directory path must start with file:// . The download might take several minutes. Tip If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, set skipTLS to true in your image set configuration file and run the command again. Verify that the image set .tar files were created: Example output If an archiveSize value was specified in the image set configuration file, the image set might be separated into multiple .tar files. Optional: Verify that total size of the image set .tar files is around 75 GB: If the total size of the image set is significantly less than 75 GB, run the oc mirror command again. Upload the contents of the generated image set to your target mirror registry: Replace <mirror_rhoai> with the directory that contains your image set .tar files. Replace <registry.example.com:5000> with your mirror registry. Tip If the tls: failed to verify certificate: x509: certificate signed by unknown authority error is returned and you want to ignore it, run the following command: Log in to your target OpenShift cluster using the OpenShift CLI as a user with the cluster-admin role. 
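In outline, the download and upload steps above use two oc-mirror plugin v2 invocations similar to the following; the workspace path and registry host correspond to the placeholders used in this procedure, and the exact flags should be checked against the oc-mirror v2 documentation referenced above.
# Mirror the image set to disk (mirror-to-disk)
oc mirror --v2 -c imageset-config.yaml file://<mirror_rhoai>
# Push the mirrored image set from disk to the target registry (disk-to-mirror)
oc mirror --v2 -c imageset-config.yaml --from file://<mirror_rhoai> docker://<registry.example.com:5000>
# Then log in to the target cluster before applying the generated resources
oc login --token=<token> --server=<cluster_API_URL>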
Verify that the YAML files are present for the ImageDigestMirrorSet and CatalogSource resources: Replace <mirror_rhoai> with the directory that contains your image set .tar files. Example output Install the generated resources into the cluster: Replace <oc_mirror_workspace_path> with the path to your oc mirror workspace. Verification Verify that the CatalogSource and pod were created successfully: This should return at least one catalog and two pods. Check that the Red Hat OpenShift AI Operator exists in the OperatorHub: Log in to the OpenShift web console. Click Operators OperatorHub . The OperatorHub page opens. Confirm that the Red Hat OpenShift AI Operator is shown. If you mirrored additional operators, such as OpenShift Pipelines, Red Hat OpenShift Serverless, or Red Hat OpenShift Service Mesh, check that those operators exist the OperatorHub. Additional resources Mirroring images for a disconnected installation by using the oc-mirror plugin v2 3.4. Configuring custom namespaces By default, OpenShift AI uses predefined namespaces, but you can define a custom namespace for the operator and DSCI.applicationNamespace as needed. Namespaces created by OpenShift AI typically include openshift or redhat in their name. Do not rename these system namespaces because they are required for OpenShift AI to function properly. Prerequisites You have access to a OpenShift AI cluster with cluster administrator privileges. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Enter the following command to create the custom namespace: If you are creating a namespace for a DSCI.applicationNamespace , enter the following command to add the correct label: 3.5. Installing the Red Hat OpenShift AI Operator This section shows how to install the Red Hat OpenShift AI Operator on your OpenShift cluster using the command-line interface (CLI) and the OpenShift web console. Note If you want to upgrade from a version of OpenShift AI rather than performing a new installation, see Upgrading OpenShift AI in a disconnected environment . Note If your OpenShift cluster uses a proxy to access the Internet, you can configure the proxy settings for the Red Hat OpenShift AI Operator. See Overriding proxy settings of an Operator for more information. 3.5.1. Installing the Red Hat OpenShift AI Operator by using the CLI The following procedure shows how to use the OpenShift command-line interface (CLI) to install the Red Hat OpenShift AI Operator on your OpenShift cluster. You must install the Operator before you can install OpenShift AI components on the cluster. Prerequisites You have a running OpenShift cluster, version 4.14 or greater, configured with a default storage class that can be dynamically provisioned. You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation . Procedure Open a new terminal window. Follow these steps to log in to your OpenShift cluster as a cluster administrator: In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . 
After you have logged in, click Display token . Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI). Create a namespace for installation of the Operator by performing the following actions: Create a namespace YAML file named rhods-operator-namespace.yaml . apiVersion: v1 kind: Namespace metadata: name: redhat-ods-operator 1 1 Defines the required redhat-ods-operator namespace for installation of the Operator. Create the namespace in your OpenShift cluster. You see output similar to the following: Create an operator group for installation of the Operator by performing the following actions: Create an OperatorGroup object custom resource (CR) file, for example, rhods-operator-group.yaml . apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhods-operator namespace: redhat-ods-operator 1 1 Defines the required redhat-ods-operator namespace. Create the OperatorGroup object in your OpenShift cluster. You see output similar to the following: Create a subscription for installation of the Operator by performing the following actions: Create a Subscription object CR file, for example, rhods-operator-subscription.yaml . apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhods-operator namespace: redhat-ods-operator 1 spec: name: rhods-operator channel: <channel> 2 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace startingCSV: rhods-operator.x.y.z 3 1 Defines the required redhat-ods-operator namespace. 2 Sets the update channel. You must specify a value of fast , stable , stable-x.y eus-x.y , or alpha . For more information, see Understanding update channels . 3 Optional: Sets the operator version. If you do not specify a value, the subscription defaults to the latest operator version. For more information, see the Red Hat OpenShift AI Self-Managed Life Cycle Knowledgebase article. Create the Subscription object in your OpenShift cluster to install the Operator. You see output similar to the following: Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. In the web console, click Home Projects and confirm that the following project namespaces are visible and listed as Active : redhat-ods-applications redhat-ods-monitoring redhat-ods-operator Additional resources Installing and managing Red Hat OpenShift AI components Adding users to OpenShift AI user groups . Adding Operators to a cluster 3.5.2. Installing the Red Hat OpenShift AI Operator by using the web console The following procedure shows how to use the OpenShift web console to install the Red Hat OpenShift AI Operator on your cluster. You must install the Operator before you can install OpenShift AI components on the cluster. Prerequisites You have a running OpenShift cluster, version 4.14 or greater, configured with a default storage class that can be dynamically provisioned. You have cluster administrator privileges for your OpenShift cluster. You have mirrored the required container images to a private registry. See Mirroring images to a private registry for a disconnected installation . Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators OperatorHub . 
On the OperatorHub page, locate the Red Hat OpenShift AI Operator by scrolling through the available Operators or by typing Red Hat OpenShift AI into the Filter by keyword box. Click the Red Hat OpenShift AI tile. The Red Hat OpenShift AI information pane opens. Select a Channel . For information about subscription update channels, see Understanding update channels . Select a Version . Click Install . The Install Operator page opens. Review or change the selected channel and version as needed. For Installation mode , note that the only available value is All namespaces on the cluster (default) . This installation mode makes the Operator available to all namespaces in the cluster. For Installed Namespace , select Operator recommended Namespace: redhat-ods-operator . For Update approval , select one of the following update strategies: Automatic : Your environment attempts to install new updates when they are available based on the content of your mirror. Manual : A cluster administrator must approve any new updates before installation begins. Important By default, the Red Hat OpenShift AI Operator follows a sequential update process. This means that if there are several versions between the current version and the target version, Operator Lifecycle Manager (OLM) upgrades the Operator to each of the intermediate versions before it upgrades it to the final, target version. If you configure automatic upgrades, OLM automatically upgrades the Operator to the latest available version. If you configure manual upgrades, a cluster administrator must manually approve each sequential update between the current version and the final, target version. For information about supported versions, see the Red Hat OpenShift AI Life Cycle Knowledgebase article. Click Install . The Installing Operators pane appears. When the installation finishes, a checkmark appears to the Operator name. Verification In the OpenShift web console, click Operators Installed Operators and confirm that the Red Hat OpenShift AI Operator shows one of the following statuses: Installing - installation is in progress; wait for this to change to Succeeded . This might take several minutes. Succeeded - installation is successful. In the web console, click Home Projects and confirm that the following project namespaces are visible and listed as Active : redhat-ods-applications redhat-ods-monitoring redhat-ods-operator Additional resources Installing and managing Red Hat OpenShift AI components Adding users to OpenShift AI user groups Adding Operators to a cluster 3.6. Installing and managing Red Hat OpenShift AI components You can use the OpenShift command-line interface (CLI) or OpenShift web console to install and manage components of Red Hat OpenShift AI on your OpenShift cluster. 3.6.1. Installing Red Hat OpenShift AI components by using the CLI To install Red Hat OpenShift AI components by using the OpenShift command-line interface (CLI), you must create and configure a DataScienceCluster object. Important The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console . For information about upgrading OpenShift AI, see Upgrading OpenShift AI Self-Managed in a disconnected environment . 
Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator . You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . Procedure Open a new terminal window. Follow these steps to log in to your OpenShift cluster as a cluster administrator: In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token . Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI). Create a DataScienceCluster object custom resource (CR) file, for example, rhods-operator-dsc.yaml . apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed 1 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform . 2 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. Important To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads components, see Installing the distributed workloads components . To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment . Create the DataScienceCluster object in your OpenShift cluster to install the specified OpenShift AI components. You see output similar to the following: Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. 
Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed. 3.6.2. Installing Red Hat OpenShift AI components by using the web console To install Red Hat OpenShift AI components by using the OpenShift web console, you must create and configure a DataScienceCluster object. Important The following procedure describes how to create and configure a DataScienceCluster object to install Red Hat OpenShift AI components as part of a new installation. For information about changing the installation status of OpenShift AI components after installation, see Updating the installation status of Red Hat OpenShift AI components by using the web console . For information about upgrading OpenShift AI, see Upgrading OpenShift AI Self-Managed in a disconnected environment . Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. See Installing the Red Hat OpenShift AI Operator . You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. Click Create DataScienceCluster . For Configure via , select YAML view . An embedded YAML editor opens showing a default custom resource (CR) for the DataScienceCluster object, similar to the following example: apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed 1 To fully install the KServe component, which is used by the single-model serving platform to serve large models, you must install Operators for Red Hat OpenShift Service Mesh and Red Hat OpenShift Serverless and perform additional configuration. See Installing the single-model serving platform . 2 If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. 
Important To learn how to fully install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads components, see Installing the distributed workloads components . To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment . Click Create . Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed. 3.6.3. Updating the installation status of Red Hat OpenShift AI components by using the web console You can use the OpenShift web console to update the installation status of components of Red Hat OpenShift AI on your OpenShift cluster. Important If you upgraded OpenShift AI, the upgrade process automatically used the values of the version's DataScienceCluster object. New components are not automatically added to the DataScienceCluster object. After upgrading OpenShift AI: Inspect the default DataScienceCluster object to check and optionally update the managementState status of the existing components. Add any new components to the DataScienceCluster object. Prerequisites The Red Hat OpenShift AI Operator is installed on your OpenShift cluster. You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift web console as a cluster administrator. In the web console, click Operators Installed Operators and then click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab. On the DataScienceClusters page, click the default object. Click the YAML tab. An embedded YAML editor opens showing the default custom resource (CR) for the DataScienceCluster object, similar to the following example: apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed In the spec.components section of the CR, for each OpenShift AI component shown, set the value of the managementState field to either Managed or Removed . 
These values are defined as follows: Managed The Operator actively manages the component, installs it, and tries to keep it active. The Operator will upgrade the component only if it is safe to do so. Removed The Operator actively manages the component but does not install it. If the component is already installed, the Operator will try to remove it. Important To learn how to install the KServe component, which is used by the single-model serving platform to serve large models, see Installing the single-model serving platform . If you have not enabled the KServe component (that is, you set the value of the managementState field to Removed ), you must also disable the dependent Service Mesh component to avoid errors. See Disabling KServe dependencies . To learn how to install the distributed workloads feature, see Installing the distributed workloads components . To learn how to run distributed workloads in a disconnected environment, see Running distributed data science workloads in a disconnected environment . Click Save . For any components that you updated, OpenShift AI initiates a rollout that affects all pods to use the updated image. Verification Confirm that there is a running pod for each component: In the OpenShift web console, click Workloads Pods . In the Project list at the top of the page, select redhat-ods-applications . In the applications namespace, confirm that there are running pods for each of the OpenShift AI components that you installed. Confirm the status of all installed components: In the OpenShift web console, click Operators Installed Operators . Click the Red Hat OpenShift AI Operator. Click the Data Science Cluster tab and select the DataScienceCluster object called default-dsc . Select the YAML tab. In the installedComponents section, confirm that the components you installed have a status value of true . Note If a component shows with the component-name: {} format in the spec.components section of the CR, the component is not installed. | [
"oc-mirror list operators",
"oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.18 --package=<package_name>",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: rhods-operator defaultChannel: fast channels: - name: fast minVersion: 2.18.0 maxVersion: 2.18.0 - name: openshift-pipelines-operator-rh channels: - name: stable - name: serverless-operator channels: - name: stable - name: servicemeshoperator channels: - name: stable",
"oc mirror -c imageset-config.yaml file://<mirror_rhoai> --v2",
"ls <mirror_rhoai>",
"mirror_000001.tar, mirror_000002.tar",
"du -h --max-depth=1 ./<mirror_rhoai>/",
"oc mirror -c imageset-config.yaml --from file://<mirror_rhoai> docker://<registry.example.com:5000> --v2",
"oc mirror --dest-tls-verify false --from=./<mirror_rhoai> docker://<registry.example.com:5000> --v2",
"ls <mirror_rhoai>/working-dir/cluster-resources/",
"cs-redhat-operator-index.yaml idms-oc-mirror.yaml",
"oc apply -f <oc_mirror_workspace_path>/working-dir/cluster-resources",
"oc get catalogsource,pod -n openshift-marketplace",
"login <openshift_cluster_url> -u <admin_username> -p <password>",
"create namespace <custom_namespace>",
"label namespace <application_namespace> opendatahub.io/application-namespace=true",
"oc login --token= <token> --server= <openshift_cluster_url>",
"apiVersion: v1 kind: Namespace metadata: name: redhat-ods-operator 1",
"oc create -f rhods-operator-namespace.yaml",
"namespace/redhat-ods-operator created",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhods-operator namespace: redhat-ods-operator 1",
"oc create -f rhods-operator-group.yaml",
"operatorgroup.operators.coreos.com/rhods-operator created",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhods-operator namespace: redhat-ods-operator 1 spec: name: rhods-operator channel: <channel> 2 source: cs-redhat-operator-index sourceNamespace: openshift-marketplace startingCSV: rhods-operator.x.y.z 3",
"oc create -f rhods-operator-subscription.yaml",
"subscription.operators.coreos.com/rhods-operator created",
"oc login --token= <token> --server= <openshift_cluster_url>",
"apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed",
"oc create -f rhods-operator-dsc.yaml",
"datasciencecluster.datasciencecluster.opendatahub.io/default created",
"apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed 1 2 kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed",
"apiVersion: datasciencecluster.opendatahub.io/v1 kind: DataScienceCluster metadata: name: default-dsc spec: components: codeflare: managementState: Removed dashboard: managementState: Removed datasciencepipelines: managementState: Removed kserve: managementState: Removed kueue: managementState: Removed modelmeshserving: managementState: Removed ray: managementState: Removed trainingoperator: managementState: Removed trustyai: managementState: Removed workbenches: managementState: Removed"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed_in_a_disconnected_environment/deploying-openshift-ai-in-a-disconnected-environment_install |
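As a command-line complement to the verification steps in the Operator and DataScienceCluster procedures above, the same checks can be made with oc. This is a minimal sketch, assuming you are logged in as a cluster administrator and that the redhat-ods-operator namespace and default-dsc object names from those procedures are in use:
# Confirm the Subscription and the resolved ClusterServiceVersion (CSV) for the Operator
oc get subscription rhods-operator -n redhat-ods-operator
oc get csv -n redhat-ods-operator
# Confirm that the Operator-managed project namespaces exist and are Active
oc get namespaces | grep redhat-ods
# After the DataScienceCluster has been created, inspect its status from the CLI
oc get datasciencecluster default-dsc -o yaml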
Chapter 3. Deploy standalone Multicloud Object Gateway | Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, the result can be total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage .
If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 3.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/deploy-standalone-multicloud-object-gateway |
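If you prefer to verify the standalone Multicloud Object Gateway deployment from the command line instead of the web console, the following sketch covers the same checks; it assumes the openshift-storage namespace used throughout the chapter above:
# List the operator and MCG pods described in the pod table and confirm they are Running
oc get pods -n openshift-storage
# Check the overall StorageSystem and NooBaa resource status reported by the Operator
oc get storagesystem -n openshift-storage
oc get noobaa -n openshift-storage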
Chapter 6. Mirroring data for hybrid and Multicloud buckets | Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . | [
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/mirroring-data-for-hybrid-and-Multicloud-buckets |
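After creating the mirrored bucket class and Object Bucket Claim shown above, you can confirm that the resources exist and report a healthy phase. This is a sketch that reuses the mirror-to-aws and mirrored-bucket example names from this chapter; the last command applies only if the MCG command-line interface is installed:
# Verify the bucket class and the claim created from it
oc get bucketclass mirror-to-aws -n openshift-storage
oc get obc mirrored-bucket
# Optionally query the MCG CLI for the bucket class status
noobaa bucketclass status mirror-to-aws -n openshift-storage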
7.7. ip6tables | 7.7. ip6tables The introduction of the next-generation Internet Protocol, called IPv6, expands beyond the 32-bit address limit of IPv4 (or IP). IPv6 supports 128-bit addresses and, as such, carrier networks that are IPv6 aware are able to address a larger number of routable addresses than IPv4. Red Hat Enterprise Linux supports IPv6 firewall rules using the Netfilter 6 subsystem and the ip6tables command. The first step in using ip6tables is to start the ip6tables service. This can be done with the command: Warning The iptables services must be turned off to use the ip6tables service exclusively: To make ip6tables start by default whenever the system is booted, change the runlevel status on the service using chkconfig . The syntax is identical to iptables in every aspect except that ip6tables supports 128-bit addresses. For example, SSH connections on an IPv6-aware network server can be enabled with the following rule: For more information about IPv6 networking, refer to the IPv6 Information Page at http://www.ipv6.org/ . | [
"service ip6tables start",
"service iptables stop chkconfig iptables off",
"chkconfig --level 345 ip6tables on",
"ip6tables -A INPUT -i eth0 -p tcp -s 3ffe:ffff:100::1/128 --dport 22 -j ACCEPT"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-firewall-ip6t |
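To review the active IPv6 rule set and make rules such as the SSH example above persist across restarts of the ip6tables service, the following commands follow the same service-based workflow as this section:
# List the currently loaded IPv6 rules with packet and byte counters
ip6tables -L -n -v
# Save the running rules to /etc/sysconfig/ip6tables so they are reloaded
# the next time the ip6tables service starts
service ip6tables save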
4.73. gmp | 4.73. gmp 4.73.1. RHBA-2012:0365 - gmp bug fix update An updated gmp package that fixes one bug is now available for Red Hat Enterprise Linux 6. The gmp package contains GNU MP, a library for arbitrary precision arithmetic, signed integers operations, rational numbers and floating point numbers. GNU MP is designed for speed, for both small and very large operands. Bug Fix BZ# 798771 Previously, the interface provided by the gmp library was changed. This resulted in one exported symbol being absent in Red Hat Enterprise Linux 6 (when compared to the Red Hat Enterprise Linux 5 system). In addition, the symbol could have been reported as missing under certain circumstances. To fix this problem, this update adds the missing symbol back to the library. All users of gmp are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gmp |
Chapter 1. Introduction to Federal Information Processing Standards (FIPS) | Chapter 1. Introduction to Federal Information Processing Standards (FIPS) The Federal Information Processing Standards (FIPS) provides guidelines and requirements for improving security and interoperability across computer systems and networks. The FIPS 140-2 and 140-3 series apply to cryptographic modules at both the hardware and software levels. The National Institute of Standards and Technology in the United States implements a cryptographic module validation program with searchable lists of both in-process and approved cryptographic modules. Red Hat Enterprise Linux (RHEL) brings an integrated framework to enable FIPS 140 compliance system-wide. When operating under FIPS mode, software packages using cryptographic libraries are self-configured according to the global policy. Most of the packages provide a way to change the default alignment behavior for compatibility or other needs. Red Hat build of OpenJDK 21 is a FIPS policy-aware package. Additional resources For more information about the cryptographic module validation program, see Cryptographic Module Validation Program CMVP on the National Institute of Standards and Technology website. For more information on how to install RHEL with FIPS mode enabled, see Installing a RHEL 8 system with FIPS mode enabled . For more information on how to enable FIPS mode after installing RHEL, see Switching the system to FIPS mode . For more information on how to run Red Hat build of OpenJDK in FIPS mode on RHEL. See Running OpenJDK in FIPS mode on RHEL . For more information on Red Hat compliance with Government Standards, see Government Standards . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel_with_fips/about-fips |
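As a brief illustration of the RHEL tooling referenced above, FIPS mode can be checked and enabled with fips-mode-setup. This is a sketch for RHEL 8 and later systems; enabling FIPS mode requires a reboot to take effect:
# Check whether the system is currently running in FIPS mode
fips-mode-setup --check
# Enable FIPS mode system-wide, then reboot for the change to take effect
fips-mode-setup --enable
reboot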
Chapter 5. Integrating OpenStack Identity (keystone) with Active Directory | Chapter 5. Integrating OpenStack Identity (keystone) with Active Directory You can integrate OpenStack Identity (keystone) with Microsoft Active Directory Domain Service (AD DS). Identity Service authenticates certain Active Directory Domain Services (AD DS) users but retains authorization settings and critical service accounts in the Identity Service database. As a result, Identity Service has read-only access to AD DS for user account authentication and continues to manage privileges assigned to authenticated accounts. By integrating the Identity service with AD DS, you allow AD DS users to authenticate to Red Hat OpenStack Platform (RHOSP) to access resources. RHOSP service accounts, such as the Identity Service and the Image service, and authorization management remain in the Identity Service database. Permissions and roles are assigned to the AD DS accounts using Identity Service management tools. The process to integrate OpenStack Identity with Active Directory includes the following stages: Configure Active Directory credentials and export the LDAPS certificate Install and configure the LDAPS certificate in OpenStack Configure director to use one or more LDAP backends Configure Controller nodes to access the Active Directory backend Configure Active Directory user or group access to OpenStack projects Verify that the domain and user lists are created correctly Optional: Create credential files for non-admin users. 5.1. Configuring Active Directory credentials To configure Active Directory Domain Service (AD DS) to integrate with OpenStack Identity, set up an LDAP account for Identity service to use, create a user group for Red Hat OpenStack users, and export the LDAPS certificate public key to use in the Red Hat OpenStack Platform deployment. Prerequisites Active Directory Domain Services is configured and operational. Red Hat OpenStack Platform is configured and operational. DNS name resolution is fully functional and all hosts are registered appropriately. AD DS authentication traffic is encrypted with LDAPS, using port 636. Recommended: Implement AD DS with a high availability or load balancing solution to avoid a single point of failure. Procedure Perform these steps on the Active Directory server. Create the LDAP lookup account. This account is used by Identity Service to query the AD DS LDAP service: Set a password for this account, and then enable it. You will be prompted to specify a password that complies with your AD domain's complexity requirements: Create a group for RHOSP users, called grp-openstack . Only members of this group can have permissions assigned in OpenStack Identity. Create the Project groups: Add the svc-ldap user to the grp-openstack group: From an AD Domain Controller, use a Certificates MMC to export your LDAPS certificate's public key (not the private key) as a DER-encoded x509 .cer file. Send this file to the RHOSP administrators. Retrieve the NetBIOS name of your AD DS domain. Send this value to the RHOSP administrators. 5.2. Installing the Active Directory LDAPS certificate OpenStack Identity (keystone) uses LDAPS queries to validate user accounts. To encrypt this traffic, keystone uses the certificate file defined by keystone.conf . To configure the LDAPS certificate, convert the public key received from Active Directory into the .crt format and copy the certificate to a location where keystone will be able to reference it. 
Note When using multiple domains for LDAP authentication, you might receive various errors, such as Unable to retrieve authorized projects , or Peer's Certificate issuer is not recognized . This can arise if keystone uses the incorrect certificate for a certain domain. As a workaround, merge all of the LDAPS public keys into a single .crt bundle, and configure all of your keystone domains to use this file. Prerequisites Active Directory credentials are configured. LDAPS certificate is exported from Active Directory. Procedure Copy the LDAPS public key to the node running OpenStack Identity and convert the .cer to .crt . This example uses a source certificate file named addc.lab.local.cer : Optional: If you need to run diagnostic commands, such as ldapsearch , you also need to add the certificate to the RHEL certificate store: Convert the .cer to .pem . This example uses a source certificate file named addc.lab.local.cer : Install the .pem on the Controller node. For example, in Red Hat Enterprise Linux: 5.3. Configuring director to use domain-specific LDAP backends To configure director to use one or more LDAP backends, set the KeystoneLDAPDomainEnable flag to true in your heat templates, and set up environment files with the information about each LDAP backend. Director then uses a separate LDAP backend for each keystone domain. Note The default directory for domain configuration files is set to /etc/keystone/domains/ . You can override this by setting the required path with the keystone::domain_config_directory hiera key and adding it as an ExtraConfig parameter within an environment file. Procedure In the heat template for your deployment, set the KeystoneLDAPDomainEnable flag to true . This configures the domain_specific_drivers_enabled option in keystone within the identity configuration group. Add a specification of the LDAP backend configuration by setting the KeystoneLDAPBackendConfigs parameter in tripleo-heat-templates , where you can then specify your required LDAP options. Create a copy of the keystone_domain_specific_ldap_backend.yaml environment file: Edit the /home/stack/templates/keystone_domain_specific_ldap_backend.yaml environment file and set the values to suit your deployment. For example, this parameter create a LDAP configuration for a keystone domain named testdomain : Note Ensure that you set the value of the use_pool parameter to a value of False . Using LDAP pools can cause TIMEOUT or SERVERDOWN errors when the pool size is exceeded. The keystone_domain_specific_ldap_backend.yaml environment file contains the following deprecated parameters that have no effect on the deployment, and can be safely removed: user_allow_create user_allow_update user_allow_delete Optional: Add more domains to the environment file. For example: This results in two domains named domain1 and domain2 ; each will have a different LDAP domain with its own configuration. 5.4. Granting the admin user access to the OpenStack Identity domain To allow the admin user to access the OpenStack Identity (keystone) domain and see the Domain tab, get the ID of the domain and the admin user, and then assign the admin role to the user in the domain. Note This does not grant the OpenStack admin account any permissions on the external service domain. In this case, the term domain refers to OpenStack's usage of the keystone domain. Procedure This procedure uses the LAB domain. Replace the domain name with the actual name of the domain that you are configuring. 
Get the ID of the LAB domain: Get the ID of the admin user from the default domain: Get the ID of the admin role: The output depends on the external service you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Use the domain and admin IDs to construct the command that adds the admin user to the admin role of the keystone LAB domain: 5.5. Granting external groups access to Red Hat OpenStack Platform projects To grant multiple authenticated users access to Red Hat OpenStack Platform (RHOSP) resources, you can authorize certain groups from the external user management service to grant access to RHOSP projects, instead of requiring OpenStack administrators to manually allocate each user to a role in a project. As a result, all members of these groups can access pre-determined projects. Prerequisites Ensure that the external service administrator completed the following steps: Creating a group named grp-openstack-admin . Creating a group named grp-openstack-demo . Adding your RHOSP users to one of these groups as needed. Adding your users to the grp-openstack group. Create the OpenStack Identity domain. This procedure uses the LAB domain. Create or choose a RHOSP project. This procedure uses a project called demo that was created with the openstack project create --domain default --description "Demo Project" demo command. Procedure Retrieve a list of user groups from the OpenStack Identity domain: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Retrieve a list of roles: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Grant the user groups access to RHOSP projects by adding them to one or more of these roles. For example, if you want users in the grp-openstack-demo group to be general users of the demo project, you must add the group to the member or _member_ role, depending on the external service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Result Members of grp-openstack-demo can log in to the dashboard by entering their username and password and entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. Additional resources Section 5.6, "Granting external users access to Red Hat OpenStack Platform projects" 5.6. Granting external users access to Red Hat OpenStack Platform projects To grant specific authenticated users from the grp-openstack group access to OpenStack resources, you can grant these users direct access to Red Hat OpenStack Platform (RHOSP) projects. Use this process in cases where you want to grant access to individual users instead of granting access to groups. Prerequisites Ensure that the external service administrator completed the following steps: Adding your RHOSP users to the grp-openstack group. Creating the OpenStack Identity domain. This procedure uses the LAB domain. Create or choose a RHOSP project. This procedure uses a project called demo that was created with the openstack project create --domain default --description "Demo Project" demo command. 
Procedure Retrieve a list of users from the OpenStack Identity domain: Retrieve a list of roles: The command output depends on the external user management service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): Grant users access to RHOSP projects by adding them to one or more of these roles. For example, if you want user1 to be a general user of the demo project, you add them to the member or _member_ role, depending on the external service that you are integrating with: Active Directory Domain Service (AD DS): Red Hat Identity Manager (IdM): If you want user1 to be an administrative user of the demo project, add the user to the admin role: Result The user1 user is able to log in to the dashboard by entering their external username and password and entering LAB in the Domain field: Note If users receive the error Error: Unable to retrieve container list. , and expect to be able to manage containers, then they must be added to the SwiftOperator role. Additional resources Section 5.5, "Granting external groups access to Red Hat OpenStack Platform projects" 5.7. Viewing the list of OpenStack Identity domains and users Use the openstack domain list command to list the available entries. Configuring multiple domains in Identity Service enables a new Domain field in the dashboard login page. Users are expected to enter the domain that matches their login credentials. Important After you complete the integration, you need to decide whether to create new projects in the Default domain or in newly created keystone domains. You must consider your workflow and how you administer user accounts. If possible, use the Default domain as an internal domain to manage service accounts and the admin project, and keep your external users in a separate domain. In this example, external accounts need to specify the LAB domain. The built-in keystone accounts, such as admin , must specify Default as their domain. Procedure Show the list of domains: Show the list of users in a specific domain. This command example specifies the --domain LAB and returns users in the LAB domain that are members of the grp-openstack group: You can also append --domain Default to show the built-in keystone accounts: 5.8. Creating a credentials file for a non-admin user After you configure users and domains for OpenStack Identity, you might need to create a credentials file for a non-admin user. Procedure Create a credentials (RC) file for a non-admin user. This example uses the user1 user in the file. 5.9. Testing OpenStack Identity integration with an external user management service To test that OpenStack Identity (keystone) successfully integrated with Active Directory Domain Service (AD DS), test user access to dashboard features. Prerequisites Integration with an external user management service, such as Active Directory (AD) or Red Hat Identity Manager (IdM) Procedure Create a test user in the external user management service, and add the user to the grp-openstack group. In Red Hat OpenStack Platform, add the user to the _member_ role of the demo project. Log in to the dashboard with the credentials of the AD test user. Click on each of the tabs to confirm that they are presented successfully without error messages. Use the dashboard to build a test instance. Note If you experience issues with these steps, log in to the dashboard with the admin account and perform the subsequent steps as that user. 
If the test is successful, it means that OpenStack is still working as expected and that an issue exists somewhere in the integration settings between OpenStack Identity and Active Directory. Additional resources Section 5.10, "Troubleshooting Active Directory integration" 5.10. Troubleshooting Active Directory integration If you encounter errors when using the Active Directory integration with OpenStack Identity, you might need to test the LDAP connection or test the certificate trust configuration. You might also need to check that the LDAPS port is accessible. Note Depending on the error type and location, perform only the relevant steps in this procedure. Procedure Test the LDAP connection by using the ldapsearch command to remotely perform test queries against the Active Directory Domain Controller. A successful result indicates that network connectivity is working, and the AD DS services are up. In this example, a test query is performed against the server 192.0.2.250 on port 636 : Note ldapsearch is a part of the openldap-clients package. You can install this using # dnf install openldap-clients This command expects to find the necessary certificate in your host operating system. If you receive the error Peer's Certificate issuer is not recognized. while testing the ldapsearch command, confirm that your TLS_CACERTDIR path is correctly set. For example: As a temporary workaround, consider disabling certificate validation. Important This setting must not be permanently configured. In the /etc/openldap/ldap.conf , set the TLS_REQCERT parameter to allow : If the ldapsearch query works after setting this value, you might need to review whether your certificate trusts are correctly configured. Use the nc command to check that LDAPS port 636 is remotely accessible. In this example, a probe is performed against the server addc.lab.local . Press ctrl-c to exit the prompt. Failure to establish a connection might indicate a firewall configuration issue. | [
"PS C:\\> New-ADUser -SamAccountName svc-ldap -Name \"svc-ldap\" -GivenName LDAP -Surname Lookups -UserPrincipalName [email protected] -Enabled USDfalse -PasswordNeverExpires USDtrue -Path 'OU=labUsers,DC=lab,DC=local'",
"PS C:\\> Set-ADAccountPassword svc-ldap -PassThru | Enable-ADAccount",
"PS C:\\> NEW-ADGroup -name \"grp-openstack\" -groupscope Global -path \"OU=labUsers,DC=lab,DC=local\"",
"PS C:\\> NEW-ADGroup -name \"grp-openstack-demo\" -groupscope Global -path \"OU=labUsers,DC=lab,DC=local\" PS C:\\> NEW-ADGroup -name \"grp-openstack-admin\" -groupscope Global -path \"OU=labUsers,DC=lab,DC=local\"",
"PS C:\\> ADD-ADGroupMember \"grp-openstack\" -members \"svc-ldap\"",
"PS C:\\> Get-ADDomain | select NetBIOSName NetBIOSName ----------- LAB",
"openssl x509 -inform der -in addc.lab.local.cer -out addc.lab.local.crt cp addc.lab.local.crt /etc/pki/ca-trust/source/anchors",
"openssl x509 -inform der -in addc.lab.local.cer -out addc.lab.local.pem",
"cp addc.lab.local.pem /etc/pki/ca-trust/source/anchors/ update-ca-trust",
"cp /usr/share/openstack-tripleo-heat-templates/environments/services/keystone_domain_specific_ldap_backend.yaml /home/stack/templates/",
"parameter_defaults: KeystoneLDAPDomainEnable: true KeystoneLDAPBackendConfigs: use_pool: False testdomain: url: ldaps://192.0.2.250 user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword suffix: dc=director,dc=example,dc=com user_tree_dn: ou=Users,dc=director,dc=example,dc=com user_filter: \"(memberOf=cn=OSuser,ou=Groups,dc=director,dc=example,dc=com)\" user_objectclass: person user_id_attribute: cn",
"KeystoneLDAPBackendConfigs: domain1: url: ldaps://domain1.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword domain2: url: ldaps://domain2.example.com user: cn=openstack,ou=Users,dc=director,dc=example,dc=com password: RedactedComplexPassword",
"openstack domain show LAB +---------+----------------------------------+ | Field | Value | +---------+----------------------------------+ | enabled | True | | id | 6800b0496429431ab1c4efbb3fe810d4 | | name | LAB | +---------+----------------------------------+",
"openstack user list --domain default | grep admin | 3d75388d351846c6a880e53b2508172a | admin |",
"openstack role list",
"+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+",
"+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 544d48aaffde48f1b3c31a52c35f01f9 | SwiftOperator | | 6d005d783bf0436e882c55c62457d33d | ResellerAdmin | | 785c70b150ee4c778fe4de088070b4cf | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | +----------------------------------+---------------+",
"openstack role add --domain 6800b0496429431ab1c4efbb3fe810d4 --user 3d75388d351846c6a880e53b2508172a 785c70b150ee4c778fe4de088070b4cf",
"openstack group list --domain LAB",
"+------------------------------------------------------------------+---------------------+ | ID | Name | +------------------------------------------------------------------+---------------------+ | 185277be62ae17e498a69f98a59b66934fb1d6b7f745f14f5f68953a665b8851 | grp-openstack | | a8d17f19f464c4548c18b97e4aa331820f9d3be52654aa8094e698a9182cbb88 | grp-openstack-admin | | d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 | grp-openstack-demo | +------------------------------------------------------------------+---------------------+",
"+------------------------------------------------------------------+---------------------+ | ID | Name | +------------------------------------------------------------------+---------------------+ | 185277be62ae17e498a69f98a59b66934fb1d6b7f745f14f5f68953a665b8851 | grp-openstack | | a8d17f19f464c4548c18b97e4aa331820f9d3be52654aa8094e698a9182cbb88 | grp-openstack-admin | | d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 | grp-openstack-demo | +------------------------------------------------------------------+---------------------+",
"openstack role list",
"+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+",
"+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 0969957bce5e4f678ca6cef00e1abf8a | ResellerAdmin | | 1fcb3c9b50aa46ee8196aaaecc2b76b7 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | d3570730eb4b4780a7fed97eba197e1b | SwiftOperator | +----------------------------------+---------------+",
"openstack role add --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 member",
"openstack role add --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8 _member_",
"openstack user list --domain LAB +------------------------------------------------------------------+----------------+ | ID | Name | +------------------------------------------------------------------+----------------+ | 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e | user1 | | 12c062faddc5f8b065434d9ff6fce03eb9259537c93b411224588686e9a38bf1 | user2 | | afaf48031eb54c3e44e4cb0353f5b612084033ff70f63c22873d181fdae2e73c | user3 | | e47fc21dcf0d9716d2663766023e2d8dc15a6d9b01453854a898cabb2396826e | user4 | +------------------------------------------------------------------+----------------+",
"openstack role list",
"+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+",
"+----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 0969957bce5e4f678ca6cef00e1abf8a | ResellerAdmin | | 1fcb3c9b50aa46ee8196aaaecc2b76b7 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | d3570730eb4b4780a7fed97eba197e1b | SwiftOperator | +----------------------------------+---------------+",
"openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e member",
"openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e _member_",
"openstack role add --project demo --user 1f24ec1f11aeb90520079c29f70afa060d22e2ce92b2eba7784c841ac418091e admin",
"openstack domain list +----------------------------------+---------+---------+----------------------------------------------------------------------+ | ID | Name | Enabled | Description | +----------------------------------+---------+---------+----------------------------------------------------------------------+ | 6800b0496429431ab1c4efbb3fe810d4 | LAB | True | | | default | Default | True | Owns users and projects available on Identity API v2. | +----------------------------------+---------+---------+----------------------------------------------------------------------+",
"openstack user list --domain LAB",
"openstack user list --domain Default",
"cat overcloudrc-v3-user1 Clear any old environment that may conflict. for key in USD( set | awk '{FS=\"=\"} /^OS_/ {print USD1}' ); do unset USDkey ; done export OS_USERNAME=user1 export NOVA_VERSION=1.1 export OS_PROJECT_NAME=demo export OS_PASSWORD=RedactedComplexPassword export OS_NO_CACHE=True export COMPUTE_API_VERSION=1.1 export no_proxy=,10.0.0.5,192.168.2.11 export OS_CLOUDNAME=overcloud export OS_AUTH_URL=https://10.0.0.5:5000/v3 export OS_AUTH_TYPE=password export PYTHONWARNINGS=\"ignore:Certificate has no, ignore:A true SSLContext object is not available\" export OS_IDENTITY_API_VERSION=3 export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=LAB",
"ldapsearch -Z -x -H ldaps://192.0.2.250:636 -D \"cn=openstack,ou=Users,dc=director,dc=example,dc=com\" -W -b \"ou=Users,dc=director,dc=example,dc=com\" -s sub \"(memberOf=cn=OSuser,ou=Groups,dc=director,dc=example,dc=com)\"",
"TLS_CACERTDIR /etc/openldap/certs",
"TLS_REQCERT allow",
"nc -v addc.lab.local 636 Ncat: Version 6.40 ( http://nmap.org/ncat ) Ncat: Connected to 192.168.200.10:636. ^C"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_openstack_identity_with_external_user_management_services/assembly-integrating-identity-with-active-directory_identity-providers |
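To verify that the group and user role assignments above were applied, you can list the resulting assignments for the demo project. This is a minimal sketch: the user name user1, the LAB domain, and the group ID are taken from the examples above, and the --names option only switches the output from IDs to names.
openstack role assignment list --project demo --user user1 --user-domain LAB --names
openstack role assignment list --project demo --group d971bb3bd5e64a454cbd0cc7af4c0773e78d61b5f81321809f8323216938cae8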
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_the_amq_streams_kafka_bridge/making-open-source-more-inclusive |
Chapter 67. UsedNodePoolStatus schema reference | Chapter 67. UsedNodePoolStatus schema reference Used in: KafkaStatus Property: name (string) - The name of the KafkaNodePool used by this Kafka resource. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-usednodepoolstatus-reference |
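As a hedged illustration of where this status type surfaces at runtime, the node pool names recorded in a Kafka resource's status can be read back with the CLI. The cluster name my-cluster and the status.kafkaNodePools field path are assumptions for this sketch, not part of the schema table above.
oc get kafka my-cluster -o jsonpath='{.status.kafkaNodePools[*].name}'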
Appendix A. Reference material | Appendix A. Reference material A.1. Example task summary report The following is an example of the Task Summary report. A.2. Example HTML report The following is an example of the JBoss Server Migration HTML report. Figure A.1. Example: Overview of sections Figure A.2. Example: Tasks detail page A.3. Example XML report The following is an example of the JBoss Server Migration XML report. <?xml version="1.0" ?> <server-migration-report xmlns="urn:jboss:server-migration:1.0" start-time="Mon, 30 Oct 2023 16:13:30 UTC"> <servers> <source name="EAP" version="7.4.0.GA" base-dir="/home/username/tools/jboss-eap-7.4"/> <target name="JBoss EAP" version="8.0.0.GA" base-dir="/home/username/tools/jboss-eap-8.0"/> </servers> <environment> <property name="baseDir" value="/home/username/tools/jboss-eap-8.0/migration"/> <property name="deployments.migrate-deployment-scanner-deployments.processedDeploymentScannerDirs" value="/home/username/tools/jboss-eap-7.4/standalone/deployments"/> <property name="report.html.fileName" value="migration-report.html"/> <property name="report.html.maxTaskPathSizeToDisplaySubtasks" value="4"/> <property name="report.html.templateFileName" value="migration-report-template.html"/> <property name="report.summary.maxTaskPathSizeToDisplaySubtasks" value="3"/> <property name="report.xml.fileName" value="migration-report.xml"/> <property name="subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceName" value="ExampleDS"/> <property name="subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryName" value="hornetq-ra"/> <property name="subsystem.logging.update.remove-console-handler.skip" value="true"/> </environment> <task number="1" name="server"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#1"/> <result status="SUCCESS"/> <subtasks> <task number="2" name="modules.migrate-modules-requested-by-user"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#2"/> <result status="SKIPPED"/> </task> <task number="3" name="standalone"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#3"/> <result status="SUCCESS"/> <subtasks> <task number="4" name="contents.standalone.migrate-content-dir"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#4"/> <result status="SKIPPED"/> </task> <task number="5" name="standalone-configurations"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#5"/> <result status="SUCCESS"/> <subtasks> <task number="6" name="standalone-configuration(source=/home/username/tools/jboss-eap-6.4/standalone/configuration/standalone-full-ha.xml)"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#6"/> <result status="SUCCESS"/> <subtasks> <task number="7" name="subsystems.remove-unsupported-subsystems"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#7"/> <result status="SUCCESS"/> <subtasks> <task number="8" name="subsystems.remove-unsupported-subsystems.remove-unsupported-extension(module=org.jboss.as.cmp)"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#8"/> <result status="SUCCESS"/> </task> </substasks> </task> ... <task number="644" name="hosts"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#644"/> <result status="SUCCESS"/> ... <subtasks> ... <task number="645" name="host(name=master)"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#645"/> <result status="SUCCESS"/> <subtasks> ... 
<task number="661" name="security-realms.migrate-properties"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#661"/> <result status="SUCCESS"/> <subtasks> <task number="662" name="security-realm.ManagementRealm.migrate-properties"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#662"/> <result status="SUCCESS"/> </task> <task number="663" name="security-realm.ApplicationRealm.migrate-properties"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#663"/> <result status="SUCCESS"/> </task> </subtasks> </task> <task number="664" name="security-realm.ApplicationRealm.add-ssl-server-identity"> <logger logger="org.jboss.migration.core.task.ServerMigrationTask#664"/> <result status="SUCCESS"/> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </server-migration-report> Revised on 2024-02-21 14:04:13 UTC | [
"------------- Task Summary ------------- server ............................................................................................................ SUCCESS standalone ....................................................................................................... SUCCESS standalone-configurations ....................................................................................... SUCCESS standalone-configuration(source=/home/username/jboss-eap-8.0/standalone/configuration/standalone-full-ha.xml) .. SUCCESS standalone-configuration(source=/home/username/jboss-eap-8.0/standalone/configuration/standalone-full.xml) ..... SUCCESS standalone-configuration(source=/home/username/jboss-eap-8.0/standalone/configuration/standalone-ha.xml) ....... SUCCESS standalone-configuration(source=/home/username/jboss-eap-8.0/standalone/configuration/standalone-osgi.xml) ..... SUCCESS standalone-configuration(source=/home/username/jboss-eap-8.0/standalone/configuration/standalone.xml) .......... SUCCESS domain ........................................................................................................... SUCCESS domain-configurations ........................................................................................... SUCCESS domain-configuration(source=/home/username/jboss-eap-8.0/domain/configuration/domain.xml) ...................... SUCCESS host-configurations ............................................................................................. SUCCESS host-configuration(source=/home/username/jboss-eap-8.0/domain/configuration/host-master.xml) ................... SUCCESS host-configuration(source=/home/username/jboss-eap-8.0/domain/configuration/host-slave.xml) .................... SUCCESS host-configuration(source=/home/username/jboss-eap-8.0/domain/configuration/host.xml) .......................... SUCCESS -------------------------- Migration Result: SUCCESS --------------------------",
"<?xml version=\"1.0\" ?> <server-migration-report xmlns=\"urn:jboss:server-migration:1.0\" start-time=\"Mon, 30 Oct 2023 16:13:30 UTC\"> <servers> <source name=\"EAP\" version=\"7.4.0.GA\" base-dir=\"/home/username/tools/jboss-eap-7.4\"/> <target name=\"JBoss EAP\" version=\"8.0.0.GA\" base-dir=\"/home/username/tools/jboss-eap-8.0\"/> </servers> <environment> <property name=\"baseDir\" value=\"/home/username/tools/jboss-eap-8.0/migration\"/> <property name=\"deployments.migrate-deployment-scanner-deployments.processedDeploymentScannerDirs\" value=\"/home/username/tools/jboss-eap-7.4/standalone/deployments\"/> <property name=\"report.html.fileName\" value=\"migration-report.html\"/> <property name=\"report.html.maxTaskPathSizeToDisplaySubtasks\" value=\"4\"/> <property name=\"report.html.templateFileName\" value=\"migration-report-template.html\"/> <property name=\"report.summary.maxTaskPathSizeToDisplaySubtasks\" value=\"3\"/> <property name=\"report.xml.fileName\" value=\"migration-report.xml\"/> <property name=\"subsystem.ee.update.setup-javaee7-default-bindings.defaultDataSourceName\" value=\"ExampleDS\"/> <property name=\"subsystem.ee.update.setup-javaee7-default-bindings.defaultJmsConnectionFactoryName\" value=\"hornetq-ra\"/> <property name=\"subsystem.logging.update.remove-console-handler.skip\" value=\"true\"/> </environment> <task number=\"1\" name=\"server\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#1\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"2\" name=\"modules.migrate-modules-requested-by-user\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#2\"/> <result status=\"SKIPPED\"/> </task> <task number=\"3\" name=\"standalone\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#3\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"4\" name=\"contents.standalone.migrate-content-dir\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#4\"/> <result status=\"SKIPPED\"/> </task> <task number=\"5\" name=\"standalone-configurations\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#5\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"6\" name=\"standalone-configuration(source=/home/username/tools/jboss-eap-6.4/standalone/configuration/standalone-full-ha.xml)\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#6\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"7\" name=\"subsystems.remove-unsupported-subsystems\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#7\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"8\" name=\"subsystems.remove-unsupported-subsystems.remove-unsupported-extension(module=org.jboss.as.cmp)\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#8\"/> <result status=\"SUCCESS\"/> </task> </substasks> </task> <task number=\"644\" name=\"hosts\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#644\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"645\" name=\"host(name=master)\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#645\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"661\" name=\"security-realms.migrate-properties\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#661\"/> <result status=\"SUCCESS\"/> <subtasks> <task number=\"662\" name=\"security-realm.ManagementRealm.migrate-properties\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#662\"/> <result 
status=\"SUCCESS\"/> </task> <task number=\"663\" name=\"security-realm.ApplicationRealm.migrate-properties\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#663\"/> <result status=\"SUCCESS\"/> </task> </subtasks> </task> <task number=\"664\" name=\"security-realm.ApplicationRealm.add-ssl-server-identity\"> <logger logger=\"org.jboss.migration.core.task.ServerMigrationTask#664\"/> <result status=\"SUCCESS\"/> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </subtasks> </task> </server-migration-report>"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/assembly_reference-info-server-migration-tool_server-migration-tool |
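The Task Summary, HTML, and XML reports shown in this appendix are generated when the JBoss Server Migration Tool runs from the target server installation. A minimal sketch of the invocation follows; the script name and the -s (source server) option are assumed from the standard tool distribution, and the paths mirror the source and target directories shown in the XML report above. The resulting migration-report.html and migration-report.xml files are written under the directory given by the baseDir environment property.
/home/username/tools/jboss-eap-8.0/bin/jboss-server-migration.sh -s /home/username/tools/jboss-eap-7.4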
Appendix B. RHEL 9 repositories | Appendix B. RHEL 9 repositories If your system is registered to the Red Hat Content Delivery Network (CDN) using the Red Hat Subscription Manager (RHSM), RHEL 9 repositories are automatically enabled during the in-place upgrade. However, on systems registered to Red Hat Satellite using RHSM, you must manually enable and synchronize both RHEL 8 and RHEL 9 repositories before running the pre-upgrade report. Note Make sure to enable the target OS version of each repository, for example 9.4. If you have enabled only the RHEL 9 version of the repositories, the in-place upgrade is inhibited. If you plan to use Red Hat Satellite during the upgrade, you must enable and synchronize at least the following RHEL 9 repositories before the upgrade using either the Satellite web UI or the hammer repository-set enable and hammer product synchronize commands: Table B.1. RHEL 9 repositories Architecture Repository Repository ID Repository name Release version 64-bit Intel and AMD BaseOS rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) x86_64 <target_os_version> AppStream rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) x86_64 <target_os_version> 64-bit ARM BaseOS rhel-9-for-aarch64-baseos-rpms Red Hat Enterprise Linux 9 for ARM 64 - BaseOS (RPMs) aarch64 <target_os_version> AppStream rhel-9-for-aarch64-appstream-rpms Red Hat Enterprise Linux 9 for ARM 64 - AppStream (RPMs) aarch64 <target_os_version> IBM Power (little endian) BaseOS rhel-9-for-ppc64le-baseos-rpms Red Hat Enterprise Linux 9 for Power, little endian - BaseOS (RPMs) ppc64le <target_os_version> AppStream rhel-9-for-ppc64le-appstream-rpms Red Hat Enterprise Linux 9 for Power, little endian - AppStream (RPMs) ppc64le <target_os_version> IBM Z BaseOS rhel-9-for-s390x-baseos-rpms Red Hat Enterprise Linux 9 for IBM z Systems - BaseOS (RPMs) s390x <target_os_version> AppStream rhel-9-for-s390x-appstream-rpms Red Hat Enterprise Linux 9 for IBM z Systems - AppStream (RPMs) s390x <target_os_version> Replace <target_os_version> with the target OS version, for example 9.4 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/appendix_rhel-9-repositories_upgrading-from-rhel-8-to-rhel-9 |
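On Satellite, the repository enablement and synchronization described above can be scripted with hammer. A minimal sketch for the x86_64 BaseOS repository follows, assuming an organization named "Example Org" and a target release of 9.4; repeat the enable step for AppStream and for other architectures as needed.
hammer repository-set enable --organization "Example Org" --product "Red Hat Enterprise Linux for x86_64" --name "Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)" --releasever "9.4" --basearch "x86_64"
hammer product synchronize --organization "Example Org" --name "Red Hat Enterprise Linux for x86_64"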
Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network | Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 7.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 7.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 7.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 7.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 7.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
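One way to check whether an existing IBM Z host meets the z14 ISA requirement is to inspect the machine type that the system reports before you plan the LPARs. This is a hedged sketch; the exact label in the output depends on the s390x kernel and util-linux versions, and the reported machine type must be compared against IBM's hardware documentation.
lscpu | grep -i 'machine type'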
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.4.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Important When running OpenShift Container Platform on IBM Z(R) without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements Five logical partitions (LPARs) Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine Additional resources Processors Resource/Systems Manager Planning Guide in IBM(R) Documentation for PR/SM mode considerations. IBM Dynamic Partition Manager (DPM) Guide in IBM(R) Documentation for DPM mode considerations. Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 7.4.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Three LPARs for OpenShift Container Platform control plane machines. 
At least six LPARs for OpenShift Container Platform compute machines. One machine or LPAR for the temporary OpenShift Container Platform bootstrap machine. IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 7.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 7.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. 
By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 7.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 7.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 7.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 7.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. 
In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 7.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 7.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 7.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. 
IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 7.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 7.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. 
The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 7.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 7.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 7.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. 
The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 7.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 7.5. 
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 7.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. 
Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. 
Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 7.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.8. 
Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 7.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. 
If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 7.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. 
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. 
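For reference, the following sketch shows how the defaultNetwork fields map onto the CNO custom resource. The manifest file name cluster-network-03-config.yml and the mtu and genevePort values are illustrative assumptions, not required settings; they are shown only to make the field layout concrete.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster            # the CNO object is always named cluster
spec:
  defaultNetwork:
    type: OVNKubernetes    # the only supported plugin during installation
    ovnKubernetesConfig:   # valid fields are described in the tables that follow
      mtu: 1400
      genevePort: 6081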
Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 7.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 7.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. 
The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 7.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 7.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 7.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 7.18. 
ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 7.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 7.11. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
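A Butane configuration that uses the openshift variant transpiles into a MachineConfig manifest. As an optional check before you continue, you can run the butane utility against each file and review the generated YAML; the output file name used here is only an example:
USD butane --pretty --strict master-storage.bu -o master-storage.yaml
Review the generated MachineConfig, and repeat the check for the compute node configuration file if you created one.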
Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 5 1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 4 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 5 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 7.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.dasd=0.0.3490 Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . 
The following is an example parameter file worker-1.parm for a worker node with multipathing: rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \ ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Installing in an LPAR . Boot the machine Repeat this procedure for the other machines in the cluster. 7.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 7.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. 
Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . 
For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 7.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
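For example, a minimal approver loop might look like the following sketch. It reuses the oc get csr command that is shown later in this procedure, but it approves every pending CSR without verifying the requestor or the node identity, so treat it as a starting point only and do not use it as-is in production:
while true; do
  # Approve every CSR that does not have a status yet (pending).
  # Add requestor and node identity checks before using this outside of a lab.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done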
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 7.16.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 7.16.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 7.16.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 7.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 7.18. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-lpar |
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC | Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.18, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs or other network settings for you. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. For one way to list these resources yourself, see the example commands that follow the internet access requirements below. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates.
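Before you run the installation program, it can be useful to confirm that the resource group, VPC, and subnets described in the VPC validation section above exist and are visible to your account. The following IBM Cloud CLI commands are one way to do that; they are an optional check rather than part of the documented procedure, and they assume that the IBM Cloud CLI with the vpc-infrastructure plugin is installed and that you are logged in to the target account and region:
USD ibmcloud resource groups
USD ibmcloud is vpcs
USD ibmcloud is subnets
If the VPC or subnet names that you plan to put in the install-config.yaml file do not appear in this output, correct the configuration before you install the cluster.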
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
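Once the agent is running, you can list the identities it currently holds. This is a quick optional check rather than part of the documented steps; a freshly started agent typically reports that it has no identities yet:
USD ssh-add -l
The next step adds your private key to the agent.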
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates.
When specifying the directory: Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select powervs as the platform to target. Select the region to deploy the cluster to. Select the zone to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" credentialsMode: Manual publish: External 12 pullSecret: '{"auths": ...}' 13 fips: false sshKey: ssh-ed25519 AAAA... 14 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 
9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin for installation. The supported value is OVNKubernetes . 11 Specify the name of an existing VPC. 12 Specify how to publish the user-facing endpoints of your cluster. 13 Required. The installation program prompts you for this value. 14 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
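To confirm that the extraction worked before you review the sample object below, you can list the output directory; each CredentialsRequest object should be represented by its own YAML file. This is an optional check, and the exact file names depend on the release image:
USD ls <path_to_directory_for_credentials_requests>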
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
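If the terminal output scrolls past, you can read the same access details back from the installer log mentioned above. For example, the following optional command shows the final lines of the log; the exact format can vary between releases:
USD tail <installation_directory>/.openshift_install.log
The kubeadmin password is also written to <installation_directory>/auth/kubeadmin-password , next to the kubeconfig file that the login procedure later in this chapter exports.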
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. Next steps Customize your cluster Optional: Opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" credentialsMode: Manual publish: External 12 pullSecret: '{\"auths\": ...}' 13 fips: false sshKey: ssh-ed25519 AAAA... 14",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc |
Chapter 10. Configuring reCAPTCHA for 3scale API Management | Chapter 10. Configuring reCAPTCHA for 3scale API Management This document describes how to configure reCAPTCHA for Red Hat 3scale API Management On-premises to protect against spam. Prerequisites An installed and configured 3scale On-premises instance on a supported OpenShift version . Get a site key and the secret key for reCAPTCHA v2. See the Register a new site web page. Add the Developer Portal domain to an allowlist if you want to use domain name validation. To configure reCAPTCHA for 3scale, perform the steps outlined in the following procedure: Section 10.1, "Configuring reCAPTCHA for spam protection in 3scale API Management" 10.1. Configuring reCAPTCHA for spam protection in 3scale API Management To configure reCAPTCHA for spam protection, you have two options to patch the secret file that contains the reCAPTCHA keys: the OpenShift Container Platform (OCP) user interface or the command line interface (CLI). Procedure OCP 4.x: Navigate to Project: [Your_project_name] > Workloads > Secrets . Edit the system-recaptcha secret file. The PRIVATE_KEY and PUBLIC_KEY from the reCAPTCHA service must be in base64 format encoding. Transform the keys to base64 encoding manually. Note The CLI reCAPTCHA option does not require base64 format encoding. CLI: Type the following command: USD oc patch secret/system-recaptcha -p '{"stringData": {"PUBLIC_KEY": "public-key-from-service", "PRIVATE_KEY": "private-key-from-service"}}' Post-procedure steps Redeploy the system pod after you have completed one of the above options. In the 3scale Admin Portal, turn on spam protection against users that are not signed in: Navigate to Audience > Developer Portal > Spam Protection . Select one of the following options: Always reCAPTCHA will always appear when a form is presented to a user who is not logged in. Suspicious only reCAPTCHA is only shown if the automated checks detect a possible spammer. Never Turns off spam protection. After system-app has redeployed, the pages that use spam protection on the Developer Portal will show the reCAPTCHA I'm not a robot checkbox. Additional resources See ReCAPTCHA home page for more information, guides, and support. | [
"oc patch secret/system-recaptcha -p '{\"stringData\": {\"PUBLIC_KEY\": \"public-key-from-service\", \"PRIVATE_KEY\": \"private-key-from-service\"}}'"
]
| https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/operating_red_hat_3scale_api_management/configuring-recaptcha-for-threescale |
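The OCP console route above requires the keys in base64; a minimal sketch of that manual encoding step, where <your-site-key> and <your-secret-key> are placeholders for the values issued by the reCAPTCHA service:

# Encode the reCAPTCHA keys before pasting them into the system-recaptcha secret
# in the web console (placeholders shown, not real credentials).
echo -n '<your-site-key>' | base64
echo -n '<your-secret-key>' | base64

# Sanity check: decoding should return the original value unchanged.
echo -n '<your-site-key>' | base64 | base64 --decode

The CLI route shown in this chapter writes to stringData, which accepts plain-text values, so no manual encoding is needed there.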
Chapter 27. File Systems | Chapter 27. File Systems Setting the retry timeout can now prevent autofs from starting without mounts from SSSD When starting the autofs utility, the sss map source was previously sometimes not ready to provide map information, but sss did not return an appropriate error to distinguish between the map does not exist and not available condition. As a consequence, automounting did not work correctly, and autofs started without mounts from SSSD. To fix this bug, autofs retries asking SSSD for the master map when the map does not exist error occurs for a configurable amount of time. Now, you can set the retry timeout to a suitable value so that the master map is read and autofs starts as expected. (BZ# 1101782 ) The autofs package now contains the README.autofs-schema file and an updated schema The samples/autofs.schema distribution file was out of date and incorrect. As a consequence, it is possible that somebody is using an incorrect LDAP schema. However, a change of the schema in use cannot be enforced. With this update: The README.autofs-schema file has been added to describe the problem and recommend which schema to use, if possible. The schema included in the autofs package has been updated to samples/autofs.schema.new . (BZ# 1383910 ) automount no longer needs to be restarted to access maps stored on the NIS server Previously, the autofs utility did not wait for the NIS client service when starting. As a consequence, if the network map source was not available at program start, the master map could not be read, and the automount service had to be restarted to access maps stored on the NIS server. With this update, autofs waits until the master map is available to obtain a startup map. As a result, automount can access the map from the NIS domain, and autofs no longer needs to be restarted on every boot. If the NIS maps are still not available after the configured wait time, the autofs configuration master_wait option might need to be increased. In the majority of cases, the wait time used by the package is sufficient. (BZ#1383194) Checking local mount availability with autofs no longer leads to a lengthy timeout before failing Previously, a server availability probe was not done for mount requests that autofs considered local because a bind mount on the local machine is expected to be available for use. If the bind mount failed, an NFS mount on the local machine was then tried. However, if the NFS server was not running on the local machine, the mount attempt sometimes suffered a lengthy timeout before failing. An availability probe has been added to the case where a bind mount is first tried, but fails, and autofs now falls back to trying to use an NFS server on the local machine. As a result, if a bind mount on the local machine fails, the fallback to trying an NFS mount on the local machine fails quickly if the local NFS server is not running. (BZ# 1420574 ) The journal is marked as idle when mounting a GFS2 file system as read-only Previously, the kernel did not mark the file system journal as idle when mounting a GFS2 file system as read-only. As a consequence, the gfs2_log_flush() function incorrectly tried to write a header block to the journal and a sequence-out-of-order error was logged. A patch has been applied to mark the journal idle when mounting a GFS2 file system as read-only. As a result, the mentioned error no longer occurs in the described scenario. 
(BZ#1213119) The id command no longer shows incorrect UIDs and GIDs When running Red Hat Enterprise Linux on an NFSv4 client connected to an NFSv4 server, the id command showed incorrect UIDs and GIDs after the key expired out of the NFS idmapper keyring. The problem persisted for 5 minutes, until the expired keys were garbage collected, after which the new key was created in the keyring and the id command provided the correct output. With this update, the keyring facility has been fixed, and the id command no longer shows incorrect output under the described circumstances. (BZ#1408330) Labeled NFS is now turned off by default The SELinux labels on a Red Hat Enterprise Linux NFS server are not normally visible to NFS clients. Instead, NFS clients see all files labeled as type nfs_t regardless of what label the files have on the server. Since Red Hat Enterprise Linux 7.3, the NFS server has the ability to communicate individual file labels to clients. Sufficiently recent clients, such as recent Fedora clients, see NFS files labeled with the same labels that those files have on the server. This is useful in certain cases, but it can also lead to unexpected access permission problems on recent clients after a server is upgraded to Red Hat Enterprise Linux 7.3 and later. Note that labeled NFS support is turned off by default on the NFS server. You can re-enable labeled NFS support by using the security_label export option. (BZ# 1406885 ) autofs mounts no longer enter an infinite loop after reaching a shutdown state If an autofs mount reached a shutdown state, and a mount request arrived and was processed before the mount-handling thread read the shutdown notification, the mount-handling thread previously exited without cleaning up the autofs mount. As a consequence, the main program never reached its exit condition and entered an infinite loop, as the autofs-managed mount was left mounted. To fix this bug, the exit condition check now takes place after each request is processed, and cleanup operations are now performed if an autofs mount has reached its shutdown state. As a result, the autofs daemon now exits as expected at shutdown. (BZ#1420584) autofs is now more reliable when handling namespaces Previously, the autofs kernel module was unable to check whether the last component of a path was a mount point in the current namespace, only whether it was a mount point in any namespace. Due to this bug, autofs sometimes incorrectly decided whether a mount point cloned into a propagation private namespace was already present. As a consequence, the automount point failed to be mounted and the error message Too many levels of symbolic links was returned. This happened, for example, when a systemd service that used the PrivateTmp option was restarted while an autofs mount was active. With this update, a namespace-aware mounted check has been added in the kernel. As a result, autofs is now more resilient to cases where a mount namespace that includes autofs mounts has been cloned to a propagation private namespace. For more details, see the KBase article at https://access.redhat.com/articles/3104671 . (BZ#1320588) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_file_systems
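The labeled NFS note above mentions the security_label export option without showing it; a minimal sketch of re-enabling labeled NFS for a single export on a RHEL 7 NFS server, where /srv/nfs and 192.0.2.0/24 are example values:

# Add (or extend) an export entry that opts back in to labeled NFS.
cat >> /etc/exports << 'EOF'
/srv/nfs 192.0.2.0/24(rw,sync,security_label)
EOF

# Re-export the file systems and confirm the option took effect.
exportfs -rav
exportfs -v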
Chapter 10. Variables of the postfix role in System Roles | Chapter 10. Variables of the postfix role in System Roles The postfix role variables allow the user to install, configure, and start the postfix Mail Transfer Agent (MTA). The following role variables are defined in this section: postfix_conf : It includes key/value pairs of all the supported postfix configuration parameters. By default, the postfix_conf does not have a value. If your scenario requires removing any existing configuration and applying the desired configuration on top of a clean postfix installation, specify the : replaced option within the postfix_conf dictionary: An example with the : replaced option: postfix_check : It determines whether a check is executed to verify the configuration changes before starting postfix. The default value is true. For example: postfix_backup : It determines whether a single backup copy of the configuration is created. By default, the postfix_backup value is false. To overwrite any existing backup, run the following command: If the postfix_backup value is changed to true , you must also set the postfix_backup_multiple value to false. For example: postfix_backup_multiple : It determines whether the role makes a timestamped backup copy of the configuration. To keep multiple backup copies, run the following command: By default, the value of postfix_backup_multiple is true. The postfix_backup_multiple:true setting overrides postfix_backup . If you want to use postfix_backup , you must set postfix_backup_multiple to false . postfix_manage_firewall : Integrates the postfix role with the firewall role to manage port access. By default, the variable is set to false . If you want to automatically manage port access from the postfix role, set the variable to true . postfix_manage_selinux : Integrates the postfix role with the selinux role to manage port access. By default, the variable is set to false . If you want to automatically manage port access from the postfix role, set the variable to true . Important The configuration parameters cannot be removed. Before running the postfix role, set the postfix_conf to all the required configuration parameters and use the file module to remove /etc/postfix/main.cf . 10.1. Additional resources /usr/share/doc/rhel-system-roles/postfix/README.md | [
"postfix_conf: relayhost: example.com",
"postfix_conf: relayhost: example.com previous: replaced",
"postfix_check: true",
"*cp /etc/postfix/main.cf /etc/postfix/main.cf.backup*",
"postfix_backup: true postfix_backup_multiple: false",
"*cp /etc/postfix/main.cf /etc/postfix/main.cf.USD(date -Isec)*"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/assembly_postfix-role-variables-in-system-roles_automating-system-administration-by-using-rhel-system-roles |
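To tie the variables above together, here is a minimal, illustrative playbook run; the mailservers host group and the relay host value are placeholders, and the role name assumes the role installed by the rhel-system-roles package:

# Write an example playbook that exercises the postfix role variables described above.
cat > postfix-config.yml << 'EOF'
- hosts: mailservers
  vars:
    postfix_conf:
      relayhost: example.com
      # previous: replaced    # uncomment to discard any existing configuration first
    postfix_check: true
    postfix_backup: true
    postfix_backup_multiple: false
  roles:
    - rhel-system-roles.postfix
EOF

# Run the play against your inventory.
ansible-playbook -i inventory postfix-config.yml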
11.3. Installation Tools | 11.3. Installation Tools IBM Installation Toolkit is an optional tool that speeds up the installation of Linux and is especially helpful for those unfamiliar with Linux. Use the IBM Installation Toolkit for the following actions: [5] Install and configure Linux on a non-virtualized Power Systems server. Install and configure Linux on servers with previously-configured logical partitions (LPARs, also known as virtualized servers). Install IBM service and productivity tools on a new or previously installed Linux system. The IBM service and productivity tools include dynamic logical partition (DLPAR) utilities. Upgrade system firmware level on Power Systems servers. Perform diagnostics or maintenance operations on previously installed systems. Migrate a LAMP server (software stack) and application data from a System x to a System p system. A LAMP server is a bundle of open source software. LAMP is an acronym for Linux, Apache HTTP Server , MySQL relational database, and PHP (Perl or Python) scripting language. Documentation for the IBM Installation Toolkit for PowerLinux is available in the Linux Information Center at http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaan%2Fpowerpack.htm PowerLinux service and productivity tools is an optional set of tools that include hardware service diagnostic aids, productivity tools, and installation aids for Linux operating systems on IBM servers based on POWER7, POWER6, POWER5, and POWER4 technology. Documentation for the service and productivity tools is available in the Linux Information Center at http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaau%2Fliaauraskickoff.htm [5] Parts of this section were previously published at IBM's Linux information for IBM systems resource at http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=%2Fliaay%2Ftools_overview.htm | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch11s03 |
Chapter 11. Red Hat Fuse and Red Hat Decision Manager | Chapter 11. Red Hat Fuse and Red Hat Decision Manager Red Hat Fuse is a distributed, cloud-native integration platform that is part of an agile integration solution. Its distributed approach enables teams to deploy integrated services where required. Fuse has the flexibility to service diverse users, including integration experts, application developers, and business users, each with their own choice of deployment, architecture, and tooling. The API-centric, container-based architecture decouples services so they can be created, extended, and deployed independently. The result is an integration solution that supports collaboration across the enterprise. Red Hat Decision Manager is an open source decision management platform that combines business rules management, complex event processing, Decision Model & Notation (DMN) execution, and Red Hat build of OptaPlanner for solving planning problems. It automates business decisions and makes that logic available to the entire business. Business assets such as rules, decision tables, and DMN models are organized in projects and stored in the Business Central repository. This ensures consistency, transparency, and the ability to audit across the business. Business users can modify business logic without requiring assistance from IT personnel. You can install Red Hat Fuse on the Apache Karaf container platform and then install and configure Red Hat Process Automation Manager in that container. You can also install Red Hat Fuse on a separate instance of Red Hat JBoss Enterprise Application Platform and integrate it with Red Hat Process Automation Manager. The kie-camel module provides integration between Red Hat Fuse and Red Hat Process Automation Manager. Important For the version of Red Hat Fuse that Red Hat Decision Manager 7.13 supports, see Red Hat Decision Manager 7 Supported Configurations . Note You can install Red Hat Fuse on Spring Boot. Red Hat Decision Manager provides no special integration for this scenario. You can use the kie-server-client library in an application running on Red Hat Fuse on Spring Boot to enable communication with Red Hat Decision Manager services running on a KIE Server. For instructions about using the kie-server-client library, see Interacting with Red Hat Decision Manager using KIE APIs . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/fuse-con |
Chapter 46. Red Hat Enterprise Linux System Roles Powered by Ansible | Chapter 46. Red Hat Enterprise Linux System Roles Powered by Ansible The postfix role of Red Hat Enterprise Linux System Roles as a Technology Preview Red Hat Enterprise Linux System Roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. Since Red Hat Enterprise Linux 7.4, the Red Hat Enterprise Linux System Roles packages have been distributed through the Extras channel. For details regarding Red Hat Enterprise Linux System Roles, see https://access.redhat.com/articles/3050101 . Red Hat Enterprise Linux System Roles currently consists of five roles: selinux kdump network timesync postfix The postfix role has been available as a Technology Preview since Red Hat Enterprise Linux 7.4. The remaining roles have been fully supported since Red Hat Enterprise Linux 7.6. (BZ#1439896) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/technology_previews_red_hat_enterprise_linux_system_roles_powered_by_ansible |
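As a practical note, the roles above ship in the rhel-system-roles package from the Extras channel; a short sketch of pulling it in on RHEL 7 (the repository ID shown is the usual Extras repository name and may differ in your environment):

# Enable the Extras repository and install the collection of system roles.
subscription-manager repos --enable=rhel-7-server-extras-rpms
yum install -y rhel-system-roles

# The roles, including the Technology Preview postfix role, are installed here:
ls /usr/share/ansible/roles/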
Chapter 57. Networking | Chapter 57. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important: Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. (BZ#1062656) Mellanox PMD in DPDK causes a performance drop when IOMMU is enabled inside the guest When running Mellanox Poll Mode Driver (PMD) in Data Plane Development Kit (DPDK) in the guest, a performance drop is expected if the iommu=pt option is not set. To make Mellanox PMD work properly, I/O memory management unit (IOMMU) needs to be explicitly enabled in the kernel, and use the passthrough mode. For doing that, pass the intel_iommu=on option (for Intel systems) to the kernel command line. In addition, use iommu=pt to have a proper I/O performance. (BZ#1578688) freeradius might fail when upgrading from RHEL 7.3 A new configuration property, correct_escapes , in the /etc/raddb/radiusd.conf file was introduced in the freeradius version distributed since RHEL 7.4. When an administrator sets correct_escapes to true , the new regular expression syntax for backslash escaping is expected. If correct_escapes is set to false , the old syntax is expected where backslashes are also escaped. For backward compatibility reasons, false is the default value. When upgrading, configuration files in the /etc/raddb/ directory are overwritten unless modified by the administrator, so the value of correct_escapes might not always correspond to which type of syntax is used in all the configuration files. As a consequence, authentication with freeradius might fail. To prevent the problem from occurring, after upgrading from freeradius version 3.0.4 (distributed with RHEL 7.3) and earlier, make sure all configuration files in the /etc/raddb/ directory use the new escaping syntax (no double backslash characters can be found) and that the value of correct_escapes in /etc/raddb/radiusd.conf is set to true . For more information and examples, see the solution at https://access.redhat.com/solutions/3241961 . (BZ#1489758) | [
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_networking |
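A compact sketch of the MD5 workaround steps described above; the sed edit is one possible way to add the line to the Service section, and editing the copied unit file by hand works just as well:

# Copy the unit file so the change is not overwritten by package updates.
cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/wpa_supplicant.service

# Append the environment setting to the [Service] section of the copy.
sed -i '/^\[Service\]/a Environment=OPENSSL_ENABLE_MD5_VERIFY=1' /etc/systemd/system/wpa_supplicant.service

# Reload unit files; restarting wpa_supplicant afterwards applies the change.
systemctl daemon-reload
systemctl restart wpa_supplicant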
A.3. Unsupported Classes and Methods in java.sql | A.3. Unsupported Classes and Methods in java.sql Table A.1. Connection Properties Class name Methods Array Not Supported Blob CallableStatement Clob Connection DatabaseMetaData NClob Not Supported PreparedStatement Ref Not Implemented ResultSet RowId Not Supported Savepoint Not Supported SQLData Not Supported SQLInput Not Supported SQLOutput Not Supported Statement Struct Not Supported | [
"getBinaryStream(long, long) - throws SQLFeatureNotSupportedException setBinaryStream(long) - throws SQLFeatureNotSupportedException setBytes - throws SQLFeatureNotSupportedException truncate(long) - throws SQLFeatureNotSupportedException",
"getObject(int parameterIndex, Map<String, Class<?>> map) - throws SQLFeatureNotSupportedException getRef - throws SQLFeatureNotSupportedException getRowId - throws SQLFeatureNotSupportedException getURL(String parameterName) - throws SQLFeatureNotSupportedException registerOutParameter - ignores registerOutParameter(String parameterName, *) - throws SQLFeatureNotSupportedException setRowId(String parameterName, RowId x) - throws SQLFeatureNotSupportedException setURL(String parameterName, URL val) - throws SQLFeatureNotSupportedException",
"getCharacterStream(long arg0, long arg1) - throws SQLFeatureNotSupportedException setAsciiStream(long arg0) - throws SQLFeatureNotSupportedException setCharacterStream(long arg0) - throws SQLFeatureNotSupportedException setString - throws SQLFeatureNotSupportedException truncate - throws SQLFeatureNotSupportedException",
"createArrayOf - throws SQLFeatureNotSupportedException createBlob - throws SQLFeatureNotSupportedException createClob - throws SQLFeatureNotSupportedException createNClob - throws SQLFeatureNotSupportedException createSQLXML - throws SQLFeatureNotSupportedException createStruct(String typeName, Object[] attributes) - throws SQLFeatureNotSupportedException getClientInfo - throws SQLFeatureNotSupportedException releaseSavepoint - throws SQLFeatureNotSupportedException rollback(Savepoint savepoint) - throws SQLFeatureNotSupportedException setHoldability - throws SQLFeatureNotSupportedException setSavepoint - throws SQLFeatureNotSupportedException setTypeMap - throws SQLFeatureNotSupportedException",
"getAttributes - throws SQLFeatureNotSupportedException getClientInfoProperties - throws SQLFeatureNotSupportedException getFunctionColumns - throws SQLFeatureNotSupportedException getFunctions - throws SQLFeatureNotSupportedException getRowIdLifetime - throws SQLFeatureNotSupportedException",
"setArray - throws SQLFeatureNotSupportedException setRef - throws SQLFeatureNotSupportedException setRowId - throws SQLFeatureNotSupportedException setUnicodeStream - throws SQLFeatureNotSupportedException",
"deleteRow - throws SQLFeatureNotSupportedException getHoldability - throws SQLFeatureNotSupportedException getObject(*, Map<String, Class<?>> map) - throws SQLFeatureNotSupportedException getRef - throws SQLFeatureNotSupportedException getRowId - throws SQLFeatureNotSupportedException getUnicodeStream - throws SQLFeatureNotSupportedException getURL - throws SQLFeatureNotSupportedException insertRow - throws SQLFeatureNotSupportedException moveToInsertRow - throws SQLFeatureNotSupportedException refreshRow - throws SQLFeatureNotSupportedException rowDeleted - throws SQLFeatureNotSupportedException rowInserted - throws SQLFeatureNotSupportedException rowUpdated - throws SQLFeatureNotSupportedException setFetchDirection - throws SQLFeatureNotSupportedException update* - throws SQLFeatureNotSupportedException",
"setCursorName(String)"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/unsupported_classes_and_methods_in_java.sql1 |
Chapter 2. Installing the MTA extension for Visual Studio Code | Chapter 2. Installing the MTA extension for Visual Studio Code You can install the MTA extension for Visual Studio Code (VS Code). Prerequisites The following are the prerequisites for the Migration Toolkit for Applications (MTA) installation: Java Development Kit (JDK) is installed. MTA supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse Temurin JDK 11 Eclipse Temurin JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. Procedure Set the environment variable JAVA_HOME : USD export JAVA_HOME=jdk11 In VS Code, click the Extensions icon on the Activity bar to open the Extensions view. Enter Migration Toolkit for Applications in the Search field. Select the Migration Toolkit for Applications extension and click Install . The MTA extension icon is displayed on the Activity bar. | [
"export JAVA_HOME=jdk11"
]
| https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/visual_studio_code_extension_guide/installing-vs-code-extension_vsc-extension-guide |
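A small sketch of checking the JDK prerequisite and pointing JAVA_HOME at a real installation before starting VS Code; /usr/lib/jvm/java-17-openjdk is an example path, not a required location:

# Confirm a supported JDK (11 or 17) is available.
java -version

# Point JAVA_HOME at the JDK installation directory (example path on RHEL/Fedora).
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk

# Start VS Code and install the extension from the Extensions view.
code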
Advanced Overcloud Customization | Advanced Overcloud Customization Red Hat OpenStack Platform 16.0 Methods for configuring advanced features using Red Hat OpenStack Platform director OpenStack Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/index
Chapter 45. Real-Time Kernel | Chapter 45. Real-Time Kernel The SCHED_DEADLINE scheduler class as Technology Preview The SCHED_DEADLINE scheduler class for the real-time kernel, which was introduced in Red Hat Enterprise Linux 7.4, continues to be available as a Technology Preview. The scheduler enables predictable task scheduling based on application deadlines. SCHED_DEADLINE benefits periodic workloads by reducing application timer manipulation. (BZ#1297061) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_real-time_kernel |
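For illustration only, scheduling a task under SCHED_DEADLINE can be done with chrt from util-linux; the runtime, deadline, and period values below (in nanoseconds) and the ./my_periodic_task command are arbitrary examples, not tuning guidance:

# Give the task a 5 ms runtime budget out of every 30 ms period,
# with the deadline equal to the period (all values in nanoseconds).
chrt --deadline --sched-runtime 5000000 --sched-deadline 30000000 --sched-period 30000000 0 ./my_periodic_task

# Show the scheduling policy of an already running process (PID 1234 as an example).
chrt -p 1234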
21.6. Monitoring Server Activity | 21.6. Monitoring Server Activity See the Monitoring Server Activity section in the Red Hat Directory Server Performance Tuning Guide . | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/monitoring_server_and_database_activity-monitoring_server_activity |
Chapter 6. Backing up and restoring the undercloud and control plane nodes with collocated Ceph monitors | Chapter 6. Backing up and restoring the undercloud and control plane nodes with collocated Ceph monitors If an error occurs during an update or upgrade, you can use ReaR backups to restore either the undercloud or overcloud control plane nodes, or both, to their previous state. Prerequisites Install and configure ReaR. For more information, see Install and configure ReaR . Prepare the backup node. For more information, see Prepare the backup node . Execute the backup procedure. For more information, see Execute the backup procedure . Procedure On the backup node, export the NFS directory to host the Ceph backups. Replace <IP_ADDRESS/24> with the IP address and subnet mask of the network: On the undercloud node, source the undercloud credentials and run the following script: To verify that the ceph-mon container has stopped, enter the following command: On the undercloud node, source the undercloud credentials and run the following script. Replace <BACKUP_NODE_IP_ADDRESS> with the IP address of the backup node: On the node that you want to restore, complete the following tasks: Power off the node before you proceed. Restore the node with the ReaR backup file that you have created during the backup process. The file is located in the /ceph_backups directory of the backup node. From the Relax-and-Recover boot menu, select Recover <CONTROL_PLANE_NODE> , where <CONTROL_PLANE_NODE> is the name of the control plane node. At the prompt, enter the following command: When the image restoration process completes, the console displays the following message: For the node that you want to restore, copy the Ceph backup from the /ceph_backups directory into the /var/lib/ceph directory: Identify the system mount points: The /dev/vda2 file system is mounted on /mnt/local . Create a temporary directory: On the control plane node, remove the existing /var/lib/ceph directory: Restore the Ceph maps. Replace <CONTROL_PLANE_NODE> with the name of your control plane node: Verify that the files are restored: Power off the node: Power on the node. The node resumes its state. | [
"cat >> /etc/exports << EOF /ceph_backups <IP_ADDRESS/24>(rw,sync,no_root_squash,no_subtree_check) EOF",
"source stackrc",
"#! /bin/bash for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print USD2}' | awk -F' ' '{print USD1}'`; do ssh -q heat-admin@USDi 'sudo systemctl stop ceph-mon@USD(hostname -s) ceph-mgr@USD(hostname -s)'; done",
"sudo podman ps | grep ceph",
"source stackrc",
"#! /bin/bash for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print USD2}' | awk -F' ' '{print USD1}'`; do ssh -q heat-admin@USDi 'sudo mkdir /ceph_backups'; done #! /bin/bash for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print USD2}' | awk -F' ' '{print USD1}'`; do ssh -q heat-admin@USDi 'sudo mount -t nfs <BACKUP_NODE_IP_ADDRESS>:/ceph_backups /ceph_backups'; done #! /bin/bash for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print USD2}' | awk -F' ' '{print USD1}'`; do ssh -q heat-admin@USDi 'sudo mkdir /ceph_backups/USD(hostname -s)'; done #! /bin/bash for i in `openstack server list -c Name -c Networks -f value | grep controller | awk -F'=' '{print USD2}' | awk -F' ' '{print USD1}'`; do ssh -q heat-admin@USDi 'sudo tar -zcv --xattrs-include=*.* --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls -f /ceph_backups/USD(hostname -s)/USD(hostname -s).tar.gz /var/lib/ceph'; done",
"RESCUE <CONTROL_PLANE_NODE> :~ # rear recover",
"Finished recovering your system Exiting rear recover Running exit tasks",
"RESCUE <CONTROL_PLANE_NODE>:~# df -h Filesystem Size Used Avail Use% Mounted on devtmpfs 16G 0 16G 0% /dev tmpfs 16G 0 16G 0% /dev/shm tmpfs 16G 8.4M 16G 1% /run tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda2 30G 13G 18G 41% /mnt/local",
"RESCUE <CONTROL_PLANE_NODE>:~ # mkdir /tmp/restore RESCUE <CONTROL_PLANE_NODE>:~ # mount -v -t nfs -o rw,noatime <BACKUP_NODE_IP_ADDRESS>:/ceph_backups /tmp/restore/",
"RESCUE <CONTROL_PLANE_NODE>:~ # rm -rf /mnt/local/var/lib/ceph/*",
"RESCUE <CONTROL_PLANE_NODE>:~ # tar -xvC /mnt/local/ -f /tmp/restore/<CONTROL_PLANE_NODE>/<CONTROL_PLANE_NODE>.tar.gz --xattrs --xattrs-include='*.*' var/lib/ceph",
"RESCUE <CONTROL_PLANE_NODE>:~ # ls -l total 0 drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-mds drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-osd drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-rbd drwxr-xr-x 2 root 107 26 Jun 18 18:52 bootstrap-rgw drwxr-xr-x 3 root 107 31 Jun 18 18:52 mds drwxr-xr-x 3 root 107 31 Jun 18 18:52 mgr drwxr-xr-x 3 root 107 31 Jun 18 18:52 mon drwxr-xr-x 2 root 107 6 Jun 18 18:52 osd drwxr-xr-x 3 root 107 35 Jun 18 18:52 radosgw drwxr-xr-x 2 root 107 6 Jun 18 18:52 tmp",
"RESCUE <CONTROL_PLANE_NODE> :~ # poweroff"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/undercloud_and_control_plane_back_up_and_restore/backup-and-restore-with-collocated-ceph-monitors_osp-ctlplane-br |
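One small addition that may help when following the procedure above: after appending the /ceph_backups entry to /etc/exports, the backup node has to re-read its export table. The service name below assumes a RHEL 8 backup node:

# Make sure the NFS server is running, then apply and inspect the new export.
systemctl enable --now nfs-server
exportfs -rav
showmount -e localhost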
Chapter 2. Managing secrets securely using Secrets Store CSI driver with GitOps | Chapter 2. Managing secrets securely using Secrets Store CSI driver with GitOps This guide walks you through the process of integrating the Secrets Store Container Storage Interface (SSCSI) driver with the GitOps Operator in OpenShift Container Platform 4.14 and later. 2.1. Overview of managing secrets using Secrets Store CSI driver with GitOps Some applications need sensitive information, such as passwords and usernames which must be concealed as good security practice. If sensitive information is exposed because role-based access control (RBAC) is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret. Important Anyone who is authorized to create a pod in a namespace can use that RBAC to read any secret in that namespace. With the SSCSI Driver Operator, you can use an external secrets store to store and provide sensitive information to pods securely. The process of integrating the OpenShift Container Platform SSCSI driver with the GitOps Operator consists of the following procedures: Storing AWS Secrets Manager resources in GitOps repository Configuring SSCSI driver to mount secrets from AWS Secrets Manager Configuring GitOps managed resources to use mounted secrets 2.1.1. Benefits Integrating the SSCSI driver with the GitOps Operator provides the following benefits: Enhance the security and efficiency of your GitOps workflows Facilitate the secure attachment of secrets into deployment pods as a volume Ensure that sensitive information is accessed securely and efficiently 2.1.2. Secrets store providers The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: AWS Secrets Manager AWS Systems Manager Parameter Store Microsoft Azure Key Vault As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in GitOps repository that is ready to use the secrets from AWS Secrets Manager: Example directory structure in GitOps repository 2 Directory that stores the aws-provider.yaml file. 3 Configuration file that installs the AWS Secrets Manager provider and deploys resources for it. 1 Configuration file that creates an application and deploys resources for AWS Secrets Manager. 4 Directory that stores the deployment pod and credential requests. 5 Directory that stores the SecretProviderClass resources to define your secrets store provider. 6 Folder that stores the credentialsrequest.yaml file. This file contains the configuration for the credentials request to mount a secret to the deployment pod. 2.2. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have configured AWS Secrets Manager to store the required secrets. SSCSI Driver Operator is installed on your cluster . Red Hat OpenShift GitOps Operator is installed on your cluster. You have a GitOps repository ready to use the secrets. You are logged in to the Argo CD instance by using the Argo CD admin account. 2.3. 
Storing AWS Secrets Manager resources in GitOps repository This guide provides instructions with examples to help you use GitOps workflows with the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. Important Using the SSCSI Driver Operator with AWS Secrets Manager is not supported in a hosted control plane cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have extracted and prepared the ccoctl binary. You have installed the jq CLI tool. Your cluster is installed on AWS and uses AWS Security Token Service (STS). You have configured AWS Secrets Manager to store the required secrets. SSCSI Driver Operator is installed on your cluster . Red Hat OpenShift GitOps Operator is installed on your cluster. You have a GitOps repository ready to use the secrets. You are logged in to the Argo CD instance by using the Argo CD admin account. Procedure Install the AWS Secrets Manager provider and add resources: In your GitOps repository, create a directory and add aws-provider.yaml file in it with the following configuration to deploy resources for the AWS Secrets Manager provider: Important The AWS Secrets Manager provider for the SSCSI driver is an upstream provider. This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality. Example aws-provider.yaml file apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [""] resources: ["serviceaccounts/token"] verbs: ["create"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["get"] - apiGroups: [""] resources: ["pods"] verbs: ["get"] - apiGroups: [""] resources: ["nodes"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: "/etc/kubernetes/secrets-store-csi-providers" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: 
"/etc/kubernetes/secrets-store-csi-providers" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux Add a secret-provider-app.yaml file in your GitOps repository to create an application and deploy resources for AWS Secrets Manager: Example secret-provider-app.yaml file apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: secret-provider-app namespace: openshift-gitops spec: destination: namespace: openshift-cluster-csi-drivers server: https://kubernetes.default.svc project: default source: path: path/to/aws-provider/resources repoURL: https://github.com/<my-domain>/<gitops>.git 1 syncPolicy: automated: prune: true selfHeal: true 1 Update the value of the repoURL field to point to your GitOps repository. Synchronize resources with the default Argo CD instance to deploy them in the cluster: Add a label to the openshift-cluster-csi-drivers namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops Apply the resources in your GitOps repository to your cluster, including the aws-provider.yaml file you just pushed: Example output application.argoproj.io/argo-app created application.argoproj.io/secret-provider-app created ... In the Argo CD UI, you can observe that the csi-secrets-store-provider-aws daemonset continues to synchronize resources. To resolve this issue, you must configure the SSCSI driver to mount secrets from the AWS Secrets Manager. 2.4. Configuring SSCSI driver to mount secrets from AWS Secrets Manager To store and manage your secrets securely, use GitOps workflows and configure the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. For example, consider that you want to mount a secret to a deployment pod under the dev namespace which is in the /environments/dev/ directory. Prerequisites You have the AWS Secrets Manager resources stored in your GitOps repository. Procedure Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command: USD oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers Example output clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "csi-secrets-store-provider-aws" Grant permission to allow the service account to read the AWS secret object: Create a credentialsrequest-dir-aws folder under a namespace-scoped directory in your GitOps repository because the credentials request is namespace-scoped. 
For example, create a credentialsrequest-dir-aws folder under the dev namespace which is in the /environments/dev/ directory by running the following command: USD mkdir credentialsrequest-dir-aws Create a YAML file with the following configuration for the credentials request in the /environments/dev/credentialsrequest-dir-aws/ path to mount a secret to the deployment pod in the dev namespace: Example credentialsrequest.yaml file apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - "secretsmanager:GetSecretValue" - "secretsmanager:DescribeSecret" effect: Allow resource: "<aws_secret_arn>" 1 secretRef: name: aws-creds namespace: dev 2 serviceAccountNames: - default 2 The namespace for the secret reference. Update the value of this namespace field according to your project deployment setup. 1 The ARN of your secret in the region where your cluster is on. The <aws_region> of <aws_secret_arn> has to match the cluster region. If it does not match, create a replication of your secret in the region where your cluster is on. Tip To find your cluster region, run the command: USD oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}' Example output us-west-2 Retrieve the OIDC provider by running the following command: USD oc get --raw=/.well-known/openid-configuration | jq -r '.issuer' Example output https://<oidc_provider_name> Copy the OIDC provider name <oidc_provider_name> from the output to use in the step. Use the ccoctl tool to process the credentials request by running the following command: USD ccoctl aws create-iam-roles \ --name my-role --region=<aws_region> \ --credentials-requests-dir=credentialsrequest-dir-aws \ --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output Example output 2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds Copy the <aws_role_arn> from the output to use in the step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds . Check the role policy on AWS to confirm the <aws_region> of "Resource" in the role policy matches the cluster region: Example role policy { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": "arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx" } ] } Bind the service account with the role ARN by running the following command: USD oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn="<aws_role_arn>" Example command USD oc annotate -n dev sa/default eks.amazonaws.com/role-arn="<aws_role_arn>" Example output serviceaccount/default annotated Create a namespace-scoped SecretProviderClass resource to define your secrets store provider. For example, you create a SecretProviderClass resource in /environments/dev/apps/app-taxi/services/taxi/base/config directory of your GitOps repository. 
Create a secret-provider-class-aws.yaml file in the same directory where the target deployment is located in your GitOps repository: Example secret-provider-class-aws.yaml apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: dev 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: "testSecret" 5 objectType: "secretsmanager" 1 Name of the secret provider class. 2 Namespace for the secret provider class. The namespace must match the namespace of the resource which will use the secret. 3 Name of the secret store provider. 4 Specifies the provider-specific configuration parameters. 5 The secret name you created in AWS. Verify that after pushing this YAML file to your GitOps repository, the namespace-scoped SecretProviderClass resource is populated in the target application page in the Argo CD UI. Note If the Sync Policy of your application is not set to Auto , you can manually sync the SecretProviderClass resource by clicking Sync in the Argo CD UI. 2.5. Configuring GitOps managed resources to use mounted secrets You must configure the GitOps managed resources by adding volume mounts configuration to a deployment and configuring the container pod to use the mounted secret. Prerequisites You have the AWS Secrets Manager resources stored in your GitOps repository. You have the Secrets Store Container Storage Interface (SSCSI) driver configured to mount secrets from AWS Secrets Manager. Procedure Configure the GitOps managed resources. For example, consider that you want to add volume mounts configuration to the deployment of the app-taxi application and the 100-deployment.yaml file is in the /environments/dev/apps/app-taxi/services/taxi/base/config/ directory. Add the volume mounting to the deployment YAML file and configure the container pod to use the secret provider class resources and mounted secret: Example YAML file apiVersion: apps/v1 kind: Deployment metadata: name: taxi namespace: dev 1 spec: replicas: 1 template: metadata: # ... spec: containers: - image: nginxinc/nginx-unprivileged:latest imagePullPolicy: Always name: taxi ports: - containerPort: 8080 volumeMounts: - name: secrets-store-inline mountPath: "/mnt/secrets-store" 2 readOnly: true resources: {} serviceAccountName: default volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: "my-aws-provider" 3 status: {} # ... 1 Namespace for the deployment. This must be the same namespace as the secret provider class. 2 The path to mount secrets in the volume mount. 3 Name of the secret provider class. Push the updated resource YAML file to your GitOps repository. In the Argo CD UI, click REFRESH on the target application page to apply the updated deployment manifest. Verify that all the resources are successfully synchronized on the target application page. Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount: List the secrets in the pod mount: USD oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/ Example command USD oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/ Example output <secret_name> View a secret in the pod mount: USD oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name> Example command USD oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testSecret Example output <secret_value> 2.6.
Additional resources Obtaining the ccoctl tool About the Cloud Credential Operator Determining the Cloud Credential Operator mode Configure your AWS cluster to use AWS STS Configuring AWS Secrets Manager to store the required secrets About the Secrets Store CSI Driver Operator Mounting secrets from an external secrets store to a CSI volume | [
"├── config │ ├── argocd │ │ ├── argo-app.yaml │ │ ├── secret-provider-app.yaml 1 │ │ ├── │ └── sscsid 2 │ └── aws-provider.yaml 3 ├── environments │ ├── dev 4 │ │ ├── apps │ │ │ └── app-taxi 5 │ │ │ ├── │ │ ├── credentialsrequest-dir-aws 6 │ │ └── env │ │ ├── │ ├── new-env │ │ ├──",
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux",
"apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: secret-provider-app namespace: openshift-gitops spec: destination: namespace: openshift-cluster-csi-drivers server: https://kubernetes.default.svc project: default source: path: path/to/aws-provider/resources repoURL: https://github.com/<my-domain>/<gitops>.git 1 syncPolicy: automated: prune: true selfHeal: true",
"oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops",
"application.argoproj.io/argo-app created application.argoproj.io/secret-provider-app created",
"oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers",
"clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: \"csi-secrets-store-provider-aws\"",
"mkdir credentialsrequest-dir-aws",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"<aws_secret_arn>\" 1 secretRef: name: aws-creds namespace: dev 2 serviceAccountNames: - default",
"oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}'",
"us-west-2",
"oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'",
"https://<oidc_provider_name>",
"ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output",
"2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\" ], \"Resource\": \"arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx\" } ] }",
"oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"oc annotate -n dev sa/default eks.amazonaws.com/role-arn=\"<aws_role_arn>\"",
"serviceaccount/default annotated",
"apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: dev 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" 5 objectType: \"secretsmanager\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: taxi namespace: dev 1 spec: replicas: 1 template: metadata: spec: containers: - image: nginxinc/nginx-unprivileged:latest imagePullPolicy: Always name: taxi ports: - containerPort: 8080 volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" 2 readOnly: true resources: {} serviceAccountName: default volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3 status: {}",
"oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/",
"oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/",
"<secret_name>",
"oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name>",
"oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testSecret",
"<secret_value>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/security/managing-secrets-securely-using-sscsid-with-gitops |
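A few extra verification commands that may be useful alongside the procedure above; the namespaces and label values follow the examples used in this chapter:

# Check that the AWS provider DaemonSet pods are running on the nodes.
oc get pods -n openshift-cluster-csi-drivers -l app=csi-secrets-store-provider-aws

# Confirm the SecretProviderClass exists in the application namespace.
oc get secretproviderclass -n dev

# Review recent provider logs if a volume mount fails.
oc logs -n openshift-cluster-csi-drivers daemonset/csi-secrets-store-provider-aws --tail=20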
Chapter 6. Upgrading Identity Management | Chapter 6. Upgrading Identity Management Identity Management is generally updated whenever a system is upgraded to a new release. Upgrades should be transparent and do not require any user or administrative intervention. 6.1. Upgrade Notes Important Due to CVE-2014-3566 , the Secure Socket Layer version 3 (SSLv3) protocol needs to be disabled in the mod_nss module. You can ensure that by following these steps: Edit the /etc/httpd/conf.d/nss.conf file and set the NSSProtocol parameter to TLSv1.0 (for backward compatibility) and TLSv1.1 . Restart the httpd service. The update process automatically updates all schema and LDAP configuration, Apache configuration, and other services configuration, and restarts all IdM-associated services. When a replica is created, it must be the same version as the master it is based on. This means that replicas should not be created on an older version of Identity Management while the servers are in the process of being upgraded. Wait until the upgrade process is completed, and then create new replicas. Schema changes are replicated between servers. So once one master server is updated, all servers and replicas will have the updated schema, even if their packages are not yet updated. This ensures that any new entries which use the new schema can still be replicated among all the servers in the IdM domain. The LDAP upgrade operation is logged in the upgrade log at /var/log/ipaupgrade.log . If any LDAP errors occur, then they are recorded in that log. Once any errors are resolved, the LDAP update process can be manually initiated by running the updater script: Clients do not need to have new packages installed. The client packages used to configure a Red Hat Enterprise Linux system do not impact the enrollment of the client within the domain. Updating client packages could bring in updated packages for other dependencies, such as certmonger , which contain bug fixes, but this is not required to maintain client functionality or behavior within the IdM domain. | [
"NSSProtocol TLSv1.0,TLSv1.1",
"service httpd restart",
"ipa-ldap-updater --upgrade"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/upgrading |
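A short sketch of checking the upgrade log and re-running the updater described above; the grep pattern and the ipactl check are illustrative:

# Look for LDAP errors recorded during the upgrade.
grep -i error /var/log/ipaupgrade.log

# After resolving the reported problems, re-run the LDAP updater manually.
ipa-ldap-updater --upgrade

# Confirm that the IdM services are running again.
ipactl status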
Chapter 8. Installing and configuring for ECC | Chapter 8. Installing and configuring for ECC This section highlights differences that you would encounter if you want to do an ECC installation, compared to the RSA instructions (chapters 6 and 7). 8.1. Prerequisites for ECC installation Prepare your systems in a similar manner to the procedure described in Chapter 6, Prerequisites for installation , making sure you adapt paths, names, and other configuration for ECC. For example, we will install the following instances: rhcs10-ECC-RootCA rhcs10-ECC-SubCA rhcs10-ECC-OCSP-rootca rhcs10-ECC-OCSP-subca rhcs10-ECC-KRA Note Please note that ECC is not supported for TMS (TPS and TKS). Create directories for storing pki files For example, on rhcs10.example.com: Setup the firewall ports for ECC Please refer to the table in Section 6.8, "Adding ports to the firewall and with SELinux context" for ports used by ECC. You can use the following command to open the ports: Then reload the firewall in order to apply the newly opened ports: Setup SELinux contexts For Red Hat Certificate System ports: For DS ports (replace the port type option http_port_t with ldap_port_t ): Install RHDS instances Install Red Hat Directory Server instances, e.g.: CC-ECC-RootCA-LDAP (LDAP ports: 1389/1636) CC-ECC-SubCA-LDAP (LDAP ports: 8389/8636) CC-ECC-OCSP-rootca-LDAP (LDAP ports: 2389/2636) CC-ECC-OCSP-subca-LDAP (LDAP ports: 9389/9636) CC-ECC-KRA-LDAP (LDAP ports: 4389/4636) Note Please note that ECC is not supported for TMS (TPS and TKS). You can use the example script below to install the DS instances. For example for the ECC RootCA: Testing CRL publishing Make sure you use the ECC algorithm in the commands. For example: Note the -a ec -c nistp256 options in the above command. 8.2. Installing ECC RHCS instances Please follow the example installation procedure described in Chapter 7, Installing and configuring Red Hat Certificate System , but make sure you adapt for ECC as relevant. We provide the following reference pkispawn files for an ECC installation: Section 8.2.1, "RootCA" Section 8.2.2, "OCSP (RootCA)" Section 8.2.3, "SubCA" Section 8.2.4, "OCSP (SubCA)" Section 8.2.5, "KRA" 8.2.1. RootCA Please refer to Section 7.3, "Create and configure the RootCA (Part I)" for the example installation procedure and adapt for an ECC installation. Important Once you have installed the RootCA, you will need to follow Section 8.2.2, "OCSP (RootCA)" . This is so that the role user certificates and the TLS server certificate of the RootCA will bear AIA extensions pointing to the OCSP instance. You can then finish configuring the RootCA by following Section 7.5, "Create and configure the RootCA (Part II)" . 8.2.2. OCSP (RootCA) Please refer to Section 7.4, "Create and configure the OCSP instance (RootCA)" for the example installation procedure and adapt for an ECC installation. Important Once you are done installing the RootCA's OCSP, do not forget to proceed with the Section 7.5, "Create and configure the RootCA (Part II)" . 8.2.3. SubCA Please refer to Section 7.6, "Create and configure the SubCA (Part I)" for the example installation procedure and adapt for an ECC installation. Important Once you have installed the SubCA, you will need to follow Section 8.2.4, "OCSP (SubCA)" . This is so that the role user certificates and the TLS server certificate of the SubCA will bear AIA extensions pointing to the OCSP instance. You can then finish configuring the SubCA by following Section 7.8, "Create and configure the SubCA (Part II)" . 8.2.4.
OCSP (SubCA) Please refer to Section 7.7, "Create and configure the OCSP instance (SubCA)" for the example installation procedure and adapt it for an ECC installation. Important Once you are done installing the SubCA's OCSP, do not forget to proceed with Section 7.8, "Create and configure the SubCA (Part II)" . 8.2.5. KRA Please refer to Section 7.9, "Create and configure the KRA instance" for the example installation procedure and adapt it for an ECC installation. 8.3. Post-installation for ECC Please follow the post-installation configuration described in Section 7.13, "Post-installation" , but when you reach Section 7.13.11, "Update the ciphers list" , make sure you apply the following ECC-specific parameters instead. Configure all your CS instances as relevant based on their role. Configuring ECC ciphers for CS instances: When a CS instance is acting as a server, add the following ciphers to the SSLHostConfig element in the server.xml file: When a CS instance is acting as a client to its internal LDAP database, add the following line to the <instance directory> / <instance type> /conf/CS.cfg file: When a CA instance is acting as a client to the KRA, add the following line to the <instance directory> /ca/conf/CS.cfg file: Once you have configured all your CS instances, restart them in order to apply the new ciphers. Configuring ECC ciphers for DS instances: By default, a Directory Server instance inherits the ciphers enabled on the OS. You can verify the enabled ciphers using the following command (here, for the SubCA's DS instance): If you wish to set the cipher list to match the ciphers of Certificate System (here, for the SubCA's DS instance): Do the same for all other DS instances, then restart the DS instances to apply the ciphers. | [
"mkdir -p /root/pki_ecc",
"firewall-cmd --permanent --add-port={20080/tcp,20443/tcp,1389/tcp,1636/tcp,20009/tcp,20005/tcp,21080/tcp,21443/tcp,8389/tcp,8636/tcp,21009/tcp,21005/tcp,23080/tcp,23443/tcp,2389/tcp,2636/tcp,23009/tcp,23005/tcp,13389/tcp,13636/tcp,22080/tcp,22443/tcp,9389/tcp,9636/tcp,22009/tcp,22005/tcp,14389/tcp,14636/tcp,23009/tcp,23005/tcp,34080/tcp,34443/tcp,4389/tcp,4636/tcp}",
"firewall-cmd --reload",
"for port in 20080 20443 21080 21443 34080 34443 22080 22443 23080 2343; do semanage port -a -t http_port_t -p tcp USDport; done",
"for port in 1389 1636 8389 8636 2389 2636 13389 13636 9389 9636 14389 14636 4389 4636; do semanage port -a -t ldap_port_t -p tcp USDport; done",
"echo \"Setting up ENV VARIABLES\" export BASEDN='dc=example,dc=com' export PORT=1389 export INSTANCE_NAME=CC-ECC-RootCA-LDAP export SECURE_PORT=1636 export PASSWORD=SECret.123 echo \"Running dscreate create-template...\" dscreate create-template | sed -e 's/;suffix =/suffix = 'USDBASEDN'/' -e 's/;instance_name = localhost/instance_name ='USDINSTANCE_NAME'/' -e 's/;port = 389/port = 'USDPORT'/' -e 's/;secure_port = 636/secure_port = 'USDSECURE_PORT'/' -e 's/;full_machine_name =/full_machine_name =/' -e 's/;create_suffix_entry = False/create_suffix_entry = True/' -e 's/;root_password = Directory_Manager_Password/root_password = 'USDPASSWORD'/' -e 's/;self_sign_cert = True/self_sign_cert = True/' > /root/pki_ecc/rootca-ldap.cfg; dscreate from-file /root/pki_ecc/rootca-ldap.cfg",
"PKCS10Client -d /root/.dogtag/pki_ecc_bootstrap/certs_db -p SECret.123 -a ec -c nistp256 -n \"cn=test user1, uid=user1\" -o /root/.dogtag/pki_ecc_bootstrap/certs_db/user1.req",
"[DEFAULT] pki_instance_name=rhcs10-ECC-RootCA pki_https_port=20443 pki_http_port=20080 ### Crypto Token pki_hsm_enable=True pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast pki_token_name=NHSM-CONN-XC pki_token_password=<YourHSMpassword> pki_audit_signing_token=NHSM-CONN-XC pki_audit_signing_key_algorithm=SHA512withEC pki_audit_signing_key_size=nistp521 pki_audit_signing_key_type=ecc pki_audit_signing_signing_algorithm=SHA512withEC pki_subsystem_token=NHSM-CONN-XC pki_subsystem_key_algorithm=SHA512withEC pki_subsystem_signing_algorithm=SHA256withEC pki_subsystem_key_size=nistp521 pki_subsystem_key_type=ecc pki_sslserver_token=NHSM-CONN-XC pki_sslserver_key_algorithm=SHA512withEC pki_sslserver_signing_algorithm=SHA512withEC pki_sslserver_key_size=nistp521 pki_sslserver_key_type=ecc ### Bootstrap Admin pki_admin_password=SECret.123 pki_admin_key_type=ecc pki_admin_key_size=nistp521 pki_admin_key_algorithm=SHA512withEC ### Bootstrap Admin client dir ### by default, if pki_client_dir, pki_client_database_dir, ### and pki_client_admin_cert_p12 are not specified, items will be placed ### under some default directories in /root/.dogtag pki_client_admin_cert_p12=/opt/pki_ecc/rhcs10-ECC-RootCA/ca_admin_cert.p12 pki_client_database_dir=/opt/pki_ecc/rhcs10-ECC-RootCA/certs_db pki_client_database_password=SECret.123 pki_client_dir=/opt/pki_ecc/rhcs10-ECC-RootCA pki_client_pkcs12_password=SECret.123 ### Internal LDAP pki_ds_bind_dn=cn=Directory Manager pki_ds_ldap_port=1389 pki_ds_ldaps_port=1636 pki_ds_password=SECret.123 pki_ds_remove_data=True pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file=/opt/pki_ecc/temp-dirsrv-rootca-cert.pem pki_ds_secure_connection_ca_nickname=DS temp CA certificate ### Security Domain pki_security_domain_hostname=rhcs10.example.com pki_security_domain_name=Example-rhcs10-ECC-RootCA pki_security_domain_password=SECret.123 [Tomcat] pki_ajp_port=20009 pki_tomcat_server_port=20005 [CA] pki_import_admin_cert=False pki_admin_nickname=PKI Bootstrap Administrator for ECC-RootCA pki_admin_name=caadmin pki_admin_uid=caadmin [email protected] pki_ca_signing_token=NHSM-CONN-XC pki_ca_signing_key_algorithm=SHA512withEC pki_ca_signing_key_size=nistp384 pki_ca_signing_key_type=ecc pki_ca_signing_nickname=CA Signing Cert - %(pki_instance_name)s pki_ca_signing_signing_algorithm=SHA512withEC pki_ocsp_signing_token=NHSM-CONN-XC pki_ocsp_signing_key_algorithm=SHA512withEC pki_ocsp_signing_key_size=nistp384 pki_ocsp_signing_key_type=ecc pki_ocsp_signing_signing_algorithm=SHA512withEC pki_ds_hostname=rhds11.example.com pki_ds_base_dn=dc=ECC-RootCA pki_ds_database=CC-ECC-RootCA-LDAP pki_share_db=False ### Enable random serial numbers pki_random_serial_numbers_enable=True",
"[DEFAULT] pki_instance_name=rhcs10-ECC-OCSP-rootca pki_https_port=34443 pki_http_port=34080 ### Crypto Token pki_hsm_enable=True pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast pki_token_name=NHSM-CONN-XC pki_token_password=<YourHSMpassword> pki_audit_signing_token=NHSM-CONN-XC pki_audit_signing_key_algorithm=SHA512withEC pki_audit_signing_key_size=nistp521 pki_audit_signing_key_type=ecc pki_audit_signing_signing_algorithm=SHA512withEC pki_subsystem_token=NHSM-CONN-XC pki_subsystem_key_algorithm=SHA512withEC pki_subsystem_signing_algorithm=SHA256withEC pki_subsystem_key_size=nistp521 pki_subsystem_key_type=ecc pki_sslserver_token=NHSM-CONN-XC pki_sslserver_key_algorithm=SHA512withEC pki_sslserver_signing_algorithm=SHA512withEC pki_sslserver_key_size=nistp521 pki_sslserver_key_type=ecc ### CA cert chain concatenated in PEM format pki_cert_chain_path=/opt/pki_ecc/ca-chain.pem ### Bootstrap Admin pki_admin_password=SECret.123 pki_admin_key_type=ecc pki_admin_key_size=nistp521 pki_admin_key_algorithm=SHA512withEC ### Bootstrap Admin client dir pki_client_admin_cert_p12=/opt/pki_ecc/rhcs10-ECC-OCSP-rootca/ocsp_admin_cert.p12 pki_client_database_dir=/opt/pki_ecc/rhcs10-ECC-OCSP-rootca/certs_db pki_client_database_password=SECret.123 pki_client_database_purge=False pki_client_dir=/opt/pki_ecc/rhcs10-ECC-OCSP-rootca pki_client_pkcs12_password=SECret.123 ### Internal LDAP pki_ds_bind_dn=cn=Directory Manager pki_ds_ldap_port=2389 pki_ds_ldaps_port=2636 pki_ds_password=SECret.123 pki_ds_remove_data=True pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file=/opt/pki_ecc/ca-chain.pem pki_ds_secure_connection_ca_nickname=CA Signing Cert - rhcs10-ECC-RootCA ### Security Domain pki_security_domain_hostname=rhcs10.example.com pki_security_domain_https_port=20443 pki_security_domain_password=SECret.123 pki_security_domain_user=caadmin [Tomcat] pki_ajp_port=34009 pki_tomcat_server_port=34005 [OCSP] pki_import_admin_cert=False pki_ocsp_signing_token=NHSM-CONN-XC pki_ocsp_signing_key_algorithm=SHA512withEC pki_ocsp_signing_key_size=nistp384 pki_ocsp_signing_key_type=ecc pki_ocsp_signing_signing_algorithm=SHA512withEC pki_admin_nickname=PKI Bootstrap Administrator for ECC-OCSP-rootca pki_admin_name=ocspadmin pki_admin_uid=ocspadmin [email protected] pki_ds_hostname=rhds11.example.com pki_ds_base_dn=dc=ECC-OCSP-rootca pki_ds_database=CC-ECC-OCSP-rootca-LDAP pki_share_db=False",
"[DEFAULT] pki_instance_name=rhcs10-ECC-SubCA pki_https_port=21443 pki_http_port=21080 ### Crypto Token pki_hsm_enable=True pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast pki_token_name=NHSM-CONN-XC pki_token_password=<YourHSMpassword> pki_audit_signing_token=NHSM-CONN-XC pki_audit_signing_key_algorithm=SHA512withEC pki_audit_signing_key_size=nistp521 pki_audit_signing_key_type=ecc pki_audit_signing_signing_algorithm=SHA512withEC pki_subsystem_token=NHSM-CONN-XC pki_subsystem_key_algorithm=SHA512withEC pki_subsystem_signing_algorithm=SHA256withEC pki_subsystem_key_size=nistp521 pki_subsystem_key_type=ecc pki_sslserver_token=NHSM-CONN-XC pki_sslserver_key_algorithm=SHA512withEC pki_sslserver_signing_algorithm=SHA512withEC pki_sslserver_key_size=nistp521 pki_sslserver_key_type=ecc ### CA cert chain concatenated in PEM format pki_cert_chain_path=/opt/pki_ecc/ca-chain.pem ### Bootstrap Admin pki_admin_password=SECret.123 pki_admin_key_type=ecc pki_admin_key_size=nistp521 pki_admin_key_algorithm=SHA512withEC ### Bootstrap Admin client dir pki_client_admin_cert_p12=/opt/pki_ecc/rhcs10-ECC-SubCA/ca_admin_cert.p12 pki_client_database_dir=/opt/pki_ecc/rhcs10-ECC-SubCA/certs_db pki_client_database_password=SECret.123 pki_client_dir=/opt/pki_ecc/rhcs10-ECC-SubCA pki_client_pkcs12_password=SECret.123 ### Internal LDAP pki_ds_bind_dn=cn=Directory Manager pki_ds_ldap_port=8389 pki_ds_ldaps_port=8636 pki_ds_password=SECret.123 pki_ds_remove_data=True pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file=/opt/pki_ecc/temp-dirsrv-subca-cert.pem pki_ds_secure_connection_ca_nickname=DS temp CA certificate [Tomcat] pki_ajp_port=21009 pki_tomcat_server_port=21005 [CA] pki_subordinate=True pki_issuing_ca_https_port=20443 pki_issuing_ca_hostname=rhcs10.example.com pki_issuing_ca=https://rhcs10.example.com:20443 ### New Security Domain pki_security_domain_hostname=rhcs10.example.com pki_security_domain_https_port=20443 pki_security_domain_password=SECret.123 pki_subordinate_create_new_security_domain=True pki_subordinate_security_domain_name=Example-rhcs10-ECC-SubCA pki_import_admin_cert=False pki_admin_nickname=PKI Bootstrap Administrator for ECC-SubCA pki_admin_name=caadmin pki_admin_uid=caadmin [email protected] pki_ca_signing_token=NHSM-CONN-XC pki_ca_signing_key_algorithm=SHA512withEC pki_ca_signing_key_size=nistp384 pki_ca_signing_key_type=ecc pki_ca_signing_nickname=CA Signing Cert - %(pki_instance_name)s pki_ca_signing_signing_algorithm=SHA512withEC pki_ocsp_signing_token=NHSM-CONN-XC pki_ocsp_signing_key_algorithm=SHA512withEC pki_ocsp_signing_key_size=nistp384 pki_ocsp_signing_key_type=ecc pki_ocsp_signing_signing_algorithm=SHA512withEC pki_ds_hostname=rhds11.example.com pki_ds_base_dn=dc=ECC-SubCA pki_ds_database=CC-ECC-SubCA-LDAP pki_share_db=False ### Enable random serial numbers pki_random_serial_numbers_enable=True",
"[DEFAULT] pki_instance_name=rhcs10-ECC-OCSP-subca pki_https_port=22443 pki_http_port=22080 ### Crypto Token pki_hsm_enable=True pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast pki_token_name=NHSM-CONN-XC pki_token_password=<YourHSMpassword> pki_audit_signing_token=NHSM-CONN-XC pki_audit_signing_key_algorithm=SHA512withEC pki_audit_signing_key_size=nistp521 pki_audit_signing_key_type=ecc pki_audit_signing_signing_algorithm=SHA512withEC pki_subsystem_token=NHSM-CONN-XC pki_subsystem_key_algorithm=SHA512withEC pki_subsystem_signing_algorithm=SHA256withEC pki_subsystem_key_size=nistp521 pki_subsystem_key_type=ecc pki_sslserver_token=NHSM-CONN-XC pki_sslserver_key_algorithm=SHA512withEC pki_sslserver_signing_algorithm=SHA512withEC pki_sslserver_key_size=nistp521 pki_sslserver_key_type=ecc ### CA cert chain concatenated in PEM format pki_cert_chain_path=/opt/pki_ecc/ca-chain.pem ### Bootstrap Admin pki_admin_password=SECret.123 pki_admin_key_type=ecc pki_admin_key_size=nistp521 pki_admin_key_algorithm=SHA512withEC ### Bootstrap Admin client dir pki_client_admin_cert_p12=/opt/pki_ecc/rhcs10-ECC-OCSP-subca/ocsp_admin_cert.p12 pki_client_database_dir=/opt/pki_ecc/rhcs10-ECC-OCSP-subca/certs_db pki_client_database_password=SECret.123 pki_client_database_purge=False pki_client_dir=/opt/pki_ecc/rhcs10-ECC-OCSP-subca pki_client_pkcs12_password=SECret.123 ### Internal LDAP pki_ds_bind_dn=cn=Directory Manager pki_ds_ldap_port=9389 pki_ds_ldaps_port=9636 pki_ds_password=SECret.123 pki_ds_remove_data=True pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file=/opt/pki_ecc/ca-chain.pem pki_ds_secure_connection_ca_nickname=CA Signing Cert - rhcs10-ECC-SubCA ### Security Domain pki_security_domain_hostname=rhcs10.example.com pki_security_domain_https_port=21443 pki_security_domain_password=SECret.123 pki_security_domain_user=caadmin [Tomcat] pki_ajp_port=22009 pki_tomcat_server_port=22005 [OCSP] pki_import_admin_cert=False pki_ocsp_signing_token=NHSM-CONN-XC pki_ocsp_signing_key_algorithm=SHA512withEC pki_ocsp_signing_key_size=nistp384 pki_ocsp_signing_key_type=ecc pki_ocsp_signing_signing_algorithm=SHA512withEC pki_admin_nickname=PKI Bootstrap Administrator for ECC-OCSP-subca pki_admin_name=ocspadmin pki_admin_uid=ocspadmin [email protected] pki_ds_hostname=rhds11.example.com pki_ds_base_dn=dc=ECC-OCSP-subca pki_ds_database=CC-ECC-OCSP-subca-LDAP pki_share_db=False",
"[DEFAULT] pki_instance_name=rhcs10-ECC-KRA pki_https_port=23443 pki_http_port=23080 ### Crypto Token pki_hsm_enable=True pki_hsm_libfile=/opt/nfast/toolkits/pkcs11/libcknfast.so pki_hsm_modulename=nfast pki_token_name=NHSM-CONN-XC pki_token_password=<YourHSMpassword> pki_audit_signing_token=NHSM-CONN-XC pki_audit_signing_key_algorithm=SHA512withEC pki_audit_signing_key_size=nistp521 pki_audit_signing_key_type=ecc pki_audit_signing_signing_algorithm=SHA512withEC pki_subsystem_token=NHSM-CONN-XC pki_subsystem_key_algorithm=SHA512withEC pki_subsystem_signing_algorithm=SHA256withEC pki_subsystem_key_size=nistp521 pki_subsystem_key_type=ecc pki_sslserver_token=NHSM-CONN-XC pki_sslserver_key_algorithm=SHA512withEC pki_sslserver_signing_algorithm=SHA512withEC pki_sslserver_key_size=nistp521 pki_sslserver_key_type=ecc ### CA cert chain concatenated in PEM format pki_cert_chain_path=/opt/pki_ecc/ca-chain.pem ### Bootstrap Admin pki_admin_password=SECret.123 pki_admin_key_type=ecc pki_admin_key_size=nistp521 pki_admin_key_algorithm=SHA512withEC ### Bootstrap Admin client dir pki_client_admin_cert_p12=/opt/pki_ecc/rhcs10-ECC-KRA/kra_admin_cert.p12 pki_client_database_dir=/opt/pki_ecc/rhcs10-ECC-KRA/certs_db pki_client_database_password=SECret.123 pki_client_database_purge=False pki_client_dir=/opt/pki_ecc/rhcs10-ECC-KRA pki_client_pkcs12_password=SECret.123 ### Internal LDAP pki_ds_bind_dn=cn=Directory Manager pki_ds_ldap_port=4389 pki_ds_ldaps_port=4636 pki_ds_password=SECret.123 pki_ds_remove_data=True pki_ds_secure_connection=True pki_ds_secure_connection_ca_pem_file=/opt/pki_ecc/ca-chain.pem pki_ds_secure_connection_ca_nickname=CA Signing Cert - rhcs10-ECC-SubCA ### Security Domain pki_security_domain_hostname=rhcs10.example.com pki_security_domain_https_port=21443 pki_security_domain_password=SECret.123 pki_security_domain_user=caadmin [Tomcat] pki_ajp_port=23009 pki_tomcat_server_port=23005 [KRA] pki_import_admin_cert=False pki_storage_token=NHSM-CONN-XC pki_storage_key_algorithm=SHA512withEC pki_storage_key_size=nistp521 pki_storage_key_type=ecc pki_storage_signing_algorithm=SHA512withEC pki_transport_token=NHSM-CONN-XC pki_transport_key_algorithm=SHA512withEC pki_transport_key_size=nistp521 pki_transport_key_type=ecc pki_transport_signing_algorithm=SHA512withEC pki_admin_nickname=PKI Bootstrap Administrator for ECC-KRA pki_admin_name=kraadmin pki_admin_uid=kraadmin [email protected] pki_ds_hostname=rhds11.example.com pki_ds_base_dn=dc=ECC-KRA pki_ds_database=CC-ECC-KRA-LDAP pki_share_db=False",
"<SSLHostConfig sslProtocol=\"TLS\" protocols=\"TLSv1.2\" certificateVerification=\"optional\" ciphers=\"ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384\">",
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
"ca.connector.KRA.clientCiphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
"dsconf -D \"cn=Directory Manager\" ldap://rhds11.example.com:8389 security ciphers list --enabled",
"dsconf -D \"cn=Directory Manager\" ldap://rhds11.example.com:8389 security ciphers set \"+TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,+TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,+TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,+\""
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/ecc_install_configure |
Chapter 2. Running Tempest tests using Test Operator | Chapter 2. Running Tempest tests using Test Operator Use the Red Hat OpenStack Services on OpenShift (RHOSO) test-operator when working with Tempest tests. Prerequisites A deployed RHOSO environment. Ensure that you are in the OpenStack project: 2.1. Tempest Custom Resources configuration file This test-v1beta1-tempest.yaml file is an example of a Tempest custom resource (CR) that you can edit and use to execute Tempest tests with test-operator . Warning Use the privileged parameter with caution. A value of true can create security vulnerabilities. 2.2. Installing Test Operator using OperatorHub Install the test-operator in the openstack-operators project using OperatorHub . Procedure Log in to the Red Hat OpenShift Container Platform (RHOCP) web console as a user with cluster-admin permissions. Select Operators > OperatorHub . Optional: In the Filter by keyword field, begin typing OpenStack to filter the list of Operators. Select the OpenStack Test Operator tile with the Red Hat source label from the list of Operators. Read the information about the Operator and click Install . On the page with the heading Install Operator located at OperatorHub > Operator Installation , select Select a Namespace in the Installed Namespace section. Optional: In the Select Project field, begin typing openstack-operators to filter the list of Projects. Select openstack-operators from the list of Projects. Click Install to make the Operator available to the openstack-operators namespace. Wait for the Operator to install and then click View Operator to check that the Status of the OpenStack Test Operator is Succeeded . Verification When the test-operator-controller-manager pod successfully spawns and the pod is running, you can communicate with the operator using the custom resources (CRs) that the test-operator accepts: 2.3. Running Tempest tests Select and run an image to use for Tempest tests. The following file names are examples and might vary from your environment. Procedure Edit the Tempest test configuration file, for example with vim : Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Add the appropriate value for the containerImage parameter: The openstack-tempest-all:current-podified image in this example contains all the default supported plug-ins. Save and close the Tempest test configuration file. Create the pod and run your Tempest tests: Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Verification Check that the pod is running: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. 2.4. Finding Tempest logs You can access the Tempest logs, for example for a test that successfully completed, or to troubleshoot a pod that has failed. Caution The test pods that run in the openstack namespace have access to the secrets in that namespace, such as clouds.yaml . These secrets might propagate into test logs stored on persistent volume claims (PVCs) created by the test-operator . Procedure Get the name and status of the relevant pod: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. 
Get the logs: Replace <pod_name> with the name of the pod that you got in the step. Verification View the logs. 2.5. Getting logs from inside the pod You can access the Tempest logs, for example, for a test that successfully completed, or to troubleshoot a pod that has failed. You can access specific and more detailed Tempest logs from inside the pod. Procedure Get the name and status of the relevant pod: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Access the pod: Replace <pod_name> with the name of the pod that you got in the step. View available log files inside the pod: View available log files in the required directory: Replace <tempest-tests> with the name of the relevant directory that you want to view logs in, for example tempest-tests . Verification View the logs. 2.6. Re-running Tempest tests Modify the Tempest configuration file and re-run the Tempest tests. Procedure Get the name and status of the relevant pod: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Note If the pod is still active, you can wait for the test to complete before proceeding to the following step. Get the name of the Tempest custom resource (CR): Delete the Tempest CR: Replace <tempest_cr> with the name of the Tempest CR that you got in the step. Verify that you deleted the pod: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Edit the Tempest test configuration file, for example with vim : Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Make the required edits to the Tempest test configuration file, for example you can modify the excludeList: parameter: Replace <excludeList_value> with the excludeList value that you want to test, such as tempest.api.identity.v3.* . Save and close the Tempest test configuration file. Create the pod for your Tempest tests: Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Verification Get the name of the pod that you created in the step: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Check in the logs for the expected changes: Replace <pod_name> with the name of the relevant pod that you got in the step and replace <excludeList_value> with the excludeList value that you added to the Tempest test configuration file, such as tempest.api.identity.v3.* . 2.7. Installing external plug-ins You can install external plug-ins, such as barbican-tempest-plugin . Note The barbican-tempest-plugin is included with the image registry.redhat.io/rhoso/openstack-tempest-all-rhel9:18.0 and is shown in the following procedure as an example. If you are using external plug-ins that are unsupported, ensure that you proceed with caution. Procedure Edit the Tempest test configuration file, for example with vim : Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . 
Add externalPlugin option, or uncomment the relevant lines in your Tempest test configuration file: Save and close the Tempest test configuration file. Create the new pod for the Tempest tests: Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Verification Get the name and status of the pod that you created in the step: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. 2.8. Fixing pod in pending state You can use the following procedure to fix a pod that is in a Pending state caused by a lack of available persistent volumes. Procedure Get the name of the relevant pod and verify it has a status of Pending : Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Confirm that the Pending status is caused by a lack of available persistent volumes: Replace <pod_name> with the name of the pod that you got in the step. List all persistent volumes that are associated with Tempest: Edit one of the persistent volumes to change the claim reference value to null : Replace <name_of_volume> with the name of one of the Tempest volume that you got from the step. Verification Confirm that the volume that you edited has changed from Released to Bound : Confirm that status of the pod has changed from Pending : Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. 2.9. Using debug mode With debug mode, you can keep the pod running if the test finishes or in case of a failure, and use a remote shell to get more information and detail. Procedure Edit the Tempest test configuration file, for example with vim : Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Change the value of debug: parameter to true , or add the line debug: true to the configuration file: Save and close the Tempest test configuration file. Create the new pod for the Tempest tests: Replace <Tempest_config> with the name of your Tempest test configuration file, such as test_v1beta1_tempest.yaml . Verification Get the name of the pod that you created in the step: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Access the pod remotely: Replace <pod_name> with the name of the pod that you got in the step. Make changes or check errors in the running pod: 2.10. Using pudb to debug Tempest tests You can use pudb to create and customize breakpoints that you can use to debug your Tempest tests. Prerequisites You have configured debug mode. For more information about debug mode, see Section 2.9, "Using debug mode" . Procedure Get the name of the pod that you want to use pudb with: Replace <pod_name> with the name that you specified in your Tempest Custom Resources configuration file, for example tempest-tests , or you can just use USD oc get pods and search for the relevant pod. Access the pod remotely: Replace <pod_name> with the name of the pod that you got in the step. 
Navigate to the correct directory: Create a Python3 lightweight virtual environment: Activate the Python3 lightweight virtual environment: Download and install pudb in the Python3 lightweight virtual environment: Find the path to the file that you want to debug, for example test_networks.py : Open your chosen file for editing: Insert the line import pudb; pu.db into the file where you want to create a pudb breakpoint, and save and close the file. Change the ownership: Run the test with the pudb breakpoint: Verification The pudb interface opens. You can interact with the pudb interface before the test completes. | [
"oc project openstack Now using project \"openstack\" on server \"https://api.crc.testing:6443\".",
"apiVersion: test.openstack.org/v1beta1 kind: Tempest metadata: name: tempest-tests namespace: openstack spec: containerImage: \"\" # storageClass: local-storage # parallel: false # debug: false # configOverwrite # --------------- # An interface to overwrite default config files like e.g. logging.conf But can also # be used to add additional files. Those get added to the service config dir in # /etc/test_operator/<file> # # configOverwrite: # file.txt: | # content of the file # SSHKeySecretName # ---------------- # SSHKeySecretName is the name of the k8s secret that contains an ssh key. The key is # mounted to ~/.ssh/id_ecdsa in the tempest pod. Note, the test-operator looks for # the private key in ssh-privatekey field of the secret. # # SSHKeySecretName: secret_name # Privileged # ---------- # When the privileged parameter has the default value of false, the test pods spawn with # allowedPrivilegedEscalation: false and without NET_ADMIN, NET_RAW, and the default capabilities. # You must set the value of the privileged parameter to true for some test-operator functionalities, # such as extraRPMs in Tempest CR, or some set tobiko tests, but setting privileged: true can decrease security. # privileged: false tempestRun: # NOTE: All parameters have default values (use only when you want to override # the default behaviour) includeList: | # <-- Use | to preserve \\n tempest.api.identity.v3.* concurrency: 8 # excludeList: | # <-- Use | to preserve \\n # tempest.api.identity.v3.* # workerFile: | # <-- Use | to preserve \\n # - worker: # - tempest.api.* # - neutron_tempest_tests # - worker: # - tempest.scenario.* # smoke: false # serial: false # parallel: true # externalPlugin: # - repository: \"https://opendev.org/openstack/barbican-tempest-plugin.git\" # - repository: \"https://opendev.org/openstack/neutron-tempest-plugin.git\" # changeRepository: \"https://review.opendev.org/openstack/neutron-tempest-plugin\" # changeRefspec: \"refs/changes/97/896397/2\" # extraImages: # - URL: https://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img # name: cirros-0.6.2-test-operator # flavor: # name: cirros-0.6.2-test-operator-flavor # RAM: 512 # disk: 20 # vcpus: 1 # extraRPMs: # ---------- # A list of URLs that point to RPMs that should be installed before # the execution of tempest. WARNING! This parameter has no efect when used # in combination with externalPlugin parameter. 
# extraRPMs: # - https://cbs.centos.org/kojifiles/packages/python-sshtunnel/0.4.0/12.el9s/noarch/python3-sshtunnel-0.4.0-12.el9s.noarch.rpm # - https://cbs.centos.org/kojifiles/packages/python-whitebox-tests-tempest/0.0.3/0.1.766ff04git.el9s/noarch/python3-whitebox-tests-tempest-0.0.3-0.1.766ff04git.el9s.noarch.rpm tempestconfRun: # NOTE: All parameters have default values (use only when you want to override # the default behaviour) # create: true # collectTiming: false # insecure: false # noDefaultDeployer: false # debug: false # verbose: false # nonAdmin: false # retryImage: false # convertToRaw: false # out: ./etc/tempest.conf # flavorMinMem: 128 # flavorMinDisk: 1 # timeout: 600 # imageDiskFormat: qcow2 # image: https://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img # The following text will be mounted to the tempest pod # as /etc/test_operator/deployer_input.yaml # deployerInput: | # [section] # value1 = exmaple_value2 # value2 = example_value2 # The following text will be mounted to the tempest pod # as /etc/test_operator/accounts.yaml # testAccounts: | # - username: 'multi_role_user' # tenant_name: 'test_tenant_42' # password: 'test_password' # roles: # - 'fun_role' # - 'not_an_admin' # - 'an_admin' # The following text will be mounted to the tempest pod # as /etc/test_operator/profile.yaml # profile: | # collect_timing: false # create: false # create_accounts_file: null # createAccountsFile: /path/to/accounts.yaml # generateProfile: /path/to/profile.yaml # networkID: # append: | # <-- Use | to preserve \\n # section1.name1 value1 # section1.name1 value2 # remove: | # <-- Use | to preserve \\n # section1.name1 value1 # section1.name1 value2 # overrides: | # <-- Use | to preserve \\n # overrides_section1.name1 value1 # overrides_section1.name1 value2 # Workflow # -------- # Workflow section can be utilized to spawn multiple test pods at the same time. # The commented out example spawns two test pods that are executed sequentially. # Each step inherits all configuration that is specified outside of the workflow # field. For each step you can overwrite values specified in the tempestRun and # tempestconfRun sections. # # workflow: # - stepName: firstStep # tempestRun: # includeList: | # tempest.api.* # - stepName: secondStep # tempestRun: # includeList: | # neutron_tempest_plugin.*",
"oc get pods -n openstack-operators",
"vim <Tempest_config>",
"registry.redhat.io/rhoso/openstack-tempest-all-rhel9:18.0",
"oc apply -f <Tempest_config>",
"oc get pods | grep -i <pod_name>",
"oc get pods | grep -i <pod_name>",
"oc logs <pod_name>",
"oc get pods | grep -i <pod_name>",
"oc debug <pod_name>",
"sh-5.1USD ls -lah /var/lib/tempest/external_files",
"sh-5.1USD ls -lah /var/lib/tempest/external_files/<tempest-tests>",
"oc get pods | grep -i <pod_name>",
"oc get tempest",
"oc delete tempest <tempest_cr>",
"oc get pods | grep -i <pod_name>",
"vim <Tempest_config>",
"excludeList: | # <-- Use | to preserve \\n <excludeList_value>",
"oc apply -f <Tempest_config>",
"oc get pods | grep -i <pod_name>",
"oc logs <pod_name> | grep <excludeList_value> --context=4",
"vim <Tempest_config>",
"externalPlugin: - repository: \"https://opendev.org/openstack/barbican-tempest-plugin.git\"",
"oc apply -f <Tempest_config>",
"oc get pods | grep -i <pod_name>",
"oc get pods | grep -i <pod_name>",
"oc describe pod <pod_name>",
"oc get pv | grep -i tempest",
"oc patch pv <name_of_volume> -p '{\"spec\":{\"claimRef\":null}}'",
"oc get pv | grep -i tempest",
"oc get pods | grep -i <pod_name>",
"vim <Tempest_config>",
"apiVersion: test.openstack.org/v1beta1 kind: Tempest metadata: name: tempest-tests namespace: openstack spec: containerImage: registry.redhat.io/rhoso/openstack-tempest-all-rhel9:18.0 debug: true",
"oc apply -f <Tempest_config>",
"oc get pods | grep -i <pod_name>",
"oc rsh <pod_name>",
"sh-5.1USD ls -lah /var/lib/tempest",
"oc get pods | grep -i <pod_name>",
"oc rsh <pod_name>",
"sh-5.1USD cd /var/lib/tempest/openshift",
"sh-5.1USD python3 -m venv --system-site-packages .venv",
"sh-5.1USD . .venv/bin/activate",
"(.venv) sh-5.1USD pip install pudb",
"(.venv) sh-5.1USD find / -name test_networks.py 2> /dev/null",
"(.venv) sh-5.1USD sudo vi /usr/lib/python3.9/site-packages/tempest/api/network/test_networks.py",
"(.venv) sh-5.1 USD sudo chown -R tempest:tempest /var/lib/tempest/.config",
"(.venv) sh-5.1 USD python -m testtools.run tempest.api.network.test_networks.NetworksTest.test_list_networks"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/validating_and_troubleshooting_the_deployed_cloud/using-tempest-operator_diagnostics |
Chapter 8. Deployment [apps/v1] | Chapter 8. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentSpec is the specification of the desired behavior of the Deployment. status object DeploymentStatus is the most recently observed status of the Deployment. 8.1.1. .spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object DeploymentStrategy describes how to replace existing pods with new ones. template PodTemplateSpec Template describes the pods that will be created. 8.1.2. .spec.strategy Description DeploymentStrategy describes how to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of rolling update. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. Possible enum values: - "Recreate" Kill all existing pods before creating new ones. - "RollingUpdate" Replace the old ReplicaSets by new one using rolling update i.e gradually scale down the old ReplicaSets and scale up the new one. 8.1.3. .spec.strategy.rollingUpdate Description Spec to control the desired behavior of rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). 
This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 8.1.4. .status Description DeploymentStatus is the most recently observed status of the Deployment. Type object Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this deployment. collisionCount integer Count of hash collisions for the Deployment. The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet. conditions array Represents the latest available observations of a deployment's current state. conditions[] object DeploymentCondition describes the state of a deployment at a certain point. observedGeneration integer The generation observed by the deployment controller. readyReplicas integer readyReplicas is the number of pods targeted by this Deployment with a Ready Condition. replicas integer Total number of non-terminated pods targeted by this deployment (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this deployment. This is the total number of pods that are still required for the deployment to have 100% available capacity. They may either be pods that are running but not yet available or pods that still have not been created. updatedReplicas integer Total number of non-terminated pods targeted by this deployment that have the desired template spec. 8.1.5. .status.conditions Description Represents the latest available observations of a deployment's current state. Type array 8.1.6. .status.conditions[] Description DeploymentCondition describes the state of a deployment at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 8.2. API endpoints The following API endpoints are available: /apis/apps/v1/deployments GET : list or watch objects of kind Deployment /apis/apps/v1/watch/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/apps/v1/namespaces/{namespace}/deployments DELETE : delete collection of Deployment GET : list or watch objects of kind Deployment POST : create a Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/deployments/{name} DELETE : delete a Deployment GET : read the specified Deployment PATCH : partially update the specified Deployment PUT : replace the specified Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} GET : watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status GET : read status of the specified Deployment PATCH : partially update status of the specified Deployment PUT : replace status of the specified Deployment 8.2.1. /apis/apps/v1/deployments Table 8.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Deployment Table 8.2. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty 8.2.2. /apis/apps/v1/watch/deployments Table 8.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.3. /apis/apps/v1/namespaces/{namespace}/deployments Table 8.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Deployment Table 8.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 8.8. Body parameters Parameter Type Description body DeleteOptions schema Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Deployment Table 8.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.11. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty HTTP method POST Description create a Deployment Table 8.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.13. Body parameters Parameter Type Description body Deployment schema Table 8.14. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 202 - Accepted Deployment schema 401 - Unauthorized Empty 8.2.4. /apis/apps/v1/watch/namespaces/{namespace}/deployments Table 8.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 8.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 8.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.5. /apis/apps/v1/namespaces/{namespace}/deployments/{name} Table 8.18. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Deployment Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.21. Body parameters Parameter Type Description body DeleteOptions schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Deployment Table 8.23. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Deployment Table 8.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.25. Body parameters Parameter Type Description body Patch schema Table 8.26. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Deployment Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Deployment schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty 8.2.6. /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} Table 8.30. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 8.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 8.2.7. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status Table 8.33. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 8.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Deployment Table 8.35. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Deployment Table 8.36. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 8.37. Body parameters Parameter Type Description body Patch schema Table 8.38. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Deployment Table 8.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.40. Body parameters Parameter Type Description body Deployment schema Table 8.41. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/workloads_apis/deployment-apps-v1 |
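The list, create, and patch operations documented above can be exercised directly against these paths with any HTTP client. The following is a minimal sketch using curl; the API server URL, bearer token, namespace, Deployment name, and body file are illustrative assumptions, not values defined by this reference, and -k is shown only to keep the example short (use a CA bundle in practice).

APISERVER=https://api.example.com:6443   # assumed cluster API endpoint
TOKEN=$(oc whoami -t)                    # any valid bearer token works
NS=my-namespace                          # assumed namespace

# List Deployments one page at a time, using the limit and continue parameters described above.
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/apis/apps/v1/namespaces/$NS/deployments?limit=2"
# Pass the returned metadata.continue value back as &continue=<token> to retrieve the next page.

# Validate a create request without persisting it, using dryRun=All.
curl -sk -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  --data @deployment.json "$APISERVER/apis/apps/v1/namespaces/$NS/deployments?dryRun=All"

# Partially update a named Deployment with a strategic merge patch.
curl -sk -X PATCH -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/strategic-merge-patch+json" \
  --data '{"spec":{"replicas":3}}' "$APISERVER/apis/apps/v1/namespaces/$NS/deployments/my-deployment"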
Chapter 3. Installing Satellite Server | Chapter 3. Installing Satellite Server When you install Satellite Server from a connected network, you can obtain packages and receive updates directly from the Red Hat Content Delivery Network. Note You cannot register Satellite Server to itself. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. For more information on subscription manifests, see Managing Red Hat Subscriptions in the Content Management Guide . Note that the Satellite installation script is based on Puppet, which means that if you run the installation script more than once, it might overwrite any manual configuration changes. To avoid this and determine which future changes apply, use the --noop argument when you run the installation script. This argument ensures that no actual changes are made. Potential changes are written to /var/log/foreman-installer/satellite.log . Files are always backed up and so you can revert any unwanted changes. For example, in the foreman-installer logs, you can see an entry similar to the following about Filebucket: You can restore the file as follows: 3.1. Configuring the HTTP Proxy to Connect to Red Hat CDN Prerequisites Your network gateway and the HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS *.akamaiedge.net 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS Satellite Server uses SSL to communicate with the Red Hat CDN securely. Use of an SSL interception proxy interferes with this communication. These hosts must be whitelisted on the proxy. For a list of IP addresses used by the Red Hat CDN (cdn.redhat.com), see the Knowledgebase article Public CIDR Lists for Red Hat on the Red Hat Customer Portal. To configure the subscription-manager with the HTTP proxy, follow the procedure below. Procedure On Satellite Server, complete the following details in the /etc/rhsm/rhsm.conf file: 3.2. Registering to Red Hat Subscription Management Registering the host to Red Hat Subscription Management enables the host to subscribe to and consume content for any subscriptions available to the user. This includes content such as Red Hat Enterprise Linux and Red Hat Satellite. For Red Hat Enterprise Linux 7, it also provides access to Red Hat Software Collections (RHSCL). Procedure Register your system with the Red Hat Content Delivery Network, entering your Customer Portal user name and password when prompted: The command displays output similar to the following: 3.3. Attaching the Satellite Infrastructure Subscription Note Skip this step if you have SCA enabled on Red Hat Customer Portal. There is no requirement of attaching the Red Hat Satellite Infrastructure Subscription to the Satellite Server using subscription-manager. For more information about SCA, see Simple Content Access . After you have registered Satellite Server, you must identify your subscription Pool ID and attach an available subscription. The Red Hat Satellite Infrastructure subscription provides access to the Red Hat Satellite and Red Hat Enterprise Linux content. For Red Hat Enterprise Linux 7, it also provides access to Red Hat Software Collections (RHSCL). This is the only subscription required. 
Red Hat Satellite Infrastructure is included with all subscriptions that include Satellite, formerly known as Smart Management. For more information, see Satellite Infrastructure Subscriptions MCT3718 MCT3719 in the Red Hat Knowledgebase . Subscriptions are classified as available if they are not already attached to a system. If you are unable to find an available Satellite subscription, see the Red Hat Knowledgebase solution How do I figure out which subscriptions have been consumed by clients registered under Red Hat Subscription Manager? to run a script to see if another system is consuming your subscription. Procedure Identify the Pool ID of the Satellite Infrastructure subscription: The command displays output similar to the following: Make a note of the subscription Pool ID. Your subscription Pool ID is different from the example provided. Attach the Satellite Infrastructure subscription to the base operating system that your Satellite Server is running on. If SCA is enabled on Satellite Server, you can skip this step: The command displays output similar to the following: Optional: Verify that the Satellite Infrastructure subscription is attached: 3.4. Configuring Repositories Use this procedure to enable the repositories that are required to install Satellite Server. Choose from the available list which operating system and version you are installing on: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 3.4.1. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Life Cycle . 3.4.2. Red Hat Enterprise Linux 7 Disable all repositories: Enable the following repositories: Note If you are installing Satellite Server as a virtual machine hosted on Red Hat Virtualization, you must also enable the Red Hat Common repository, and install Red Hat Virtualization guest agents and drivers. For more information, see Installing the Guest Agents and Drivers on Red Hat Enterprise Linux in the Virtual Machine Management Guide . 3.5. Installing Satellite Server Packages Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 7 3.5.1. Red Hat Enterprise Linux 8 Procedure Update all packages: Install Satellite Server packages: 3.5.2. Red Hat Enterprise Linux 7 Update all packages: Install Satellite Server packages: 3.6. Synchronizing the System Clock With chronyd To minimize the effects of time drift, you must synchronize the system clock on the base operating system on which you want to install Satellite Server with Network Time Protocol (NTP) servers. If the base operating system clock is configured incorrectly, certificate verification might fail. For more information about the chrony suite, see Using the Chrony suite to configure NTP in Red Hat Enterprise Linux 8 Configuring basic system settings , and Configuring NTP Using the chrony Suite in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure Install the chrony package: Start and enable the chronyd service: 3.7. 
Installing the SOS Package on the Base Operating System Install the sos package on the base operating system so that you can collect configuration and diagnostic information from a Red Hat Enterprise Linux system. You can also use it to provide the initial system analysis, which is required when opening a service request with Red Hat Technical Support. For more information on using sos , see the Knowledgebase solution What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? on the Red Hat Customer Portal. Procedure Install the sos package: 3.8. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. Note Depending on the options that you use when running the Satellite installer, the configuration can take several minutes to complete. 3.8.1. Configuring Satellite Installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the available options and any default values. If you do not specify any values, the default values are used. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. Remote Execution is the primary method of managing packages on Content Hosts. If you want to use the deprecated Katello Agent instead of Remote Execution SSH, use the --foreman-proxy-content-enable-katello-agent=true option to enable it. The same option should be given on any Capsule Server as well as Satellite Server. By default, all configuration files configured by the installer are managed by Puppet. When satellite-installer runs, it overwrites any manual changes to the Puppet managed files with the initial values. If you want to manage DNS files and DHCP files manually, use the --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false options so that Puppet does not manage the files related to the respective services. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . 
Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . 3.9. Importing a Red Hat Subscription Manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Prerequisites You must have a Red Hat subscription manifest file exported from the Customer Portal. For more information, see Creating and Managing Manifests in Using Red Hat Subscription Management . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Browse . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . If the Manage Manifest window does not close automatically, click Close to return to the Subscriptions window. CLI procedure Copy the Red Hat subscription manifest file from your client to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in the Content Management guide. | [
"/Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1",
"an http proxy server to use (enter server FQDN) proxy_hostname = myproxy.example.com port for http proxy server proxy_port = 8080 user name for authenticating to an http proxy, if needed proxy_user = password for basic http proxy auth, if needed proxy_password =",
"subscription-manager register",
"subscription-manager register Username: user_name Password: The system has been registered with ID: 541084ff2-44cab-4eb1-9fa1-7683431bcf9a",
"subscription-manager list --all --available --matches 'Red Hat Satellite Infrastructure Subscription'",
"Subscription Name: Red Hat Satellite Infrastructure Subscription Provides: Red Hat Satellite Red Hat Software Collections (for RHEL Server) Red Hat CodeReady Linux Builder for x86_64 Red Hat Ansible Engine Red Hat Enterprise Linux Load Balancer (for RHEL Server) Red Hat Red Hat Software Collections (for RHEL Server) Red Hat Enterprise Linux Server Red Hat Satellite Capsule Red Hat Enterprise Linux for x86_64 Red Hat Enterprise Linux High Availability for x86_64 Red Hat Satellite Red Hat Satellite 5 Managed DB Red Hat Satellite 6 Red Hat Discovery SKU: MCT3719 Contract: 11878983 Pool ID: 8a85f99968b92c3701694ee998cf03b8 Provides Management: No Available: 1 Suggested: 1 Service Level: Premium Service Type: L1-L3 Subscription Type: Standard Ends: 03/04/2020 System Type: Physical",
"subscription-manager attach --pool= pool_id",
"Successfully attached a subscription for: Red Hat Satellite Infrastructure Subscription",
"subscription-manager list --consumed",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-6.11-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.11-for-rhel-8-x86_64-rpms",
"dnf module enable satellite:el8",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-server-rhscl-7-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=rhel-7-server-satellite-6.11-rpms --enable=rhel-7-server-satellite-maintenance-6.11-rpms",
"dnf update",
"dnf install satellite",
"yum update",
"yum install satellite",
"yum install chrony",
"systemctl start chronyd systemctl enable chronyd",
"yum install sos",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \""
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/Installing_Server_Connected_satellite |
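The --noop dry-run mode and the --help listing mentioned earlier in this chapter are useful before committing to the configuration in Section 3.8. A brief sketch; the log path and the Filebucketed entries match the examples shown at the start of the chapter:

satellite-installer --scenario satellite --help
satellite-installer --scenario satellite --noop --verbose
grep Filebucketed /var/log/foreman-installer/satellite.log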
function::fullpath_struct_nameidata | function::fullpath_struct_nameidata Name function::fullpath_struct_nameidata - get the full nameidata path Synopsis Arguments nd Pointer to " struct nameidata " . Description Returns the full dirent name (full path to the root), like the kernel (and systemtap-tapset) d_path function, with a " / " . | [
"fullpath_struct_nameidata(nd:)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-fullpath-struct-nameidata |
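This function is normally called from a probe placed on a kernel path-walk routine that receives a struct nameidata pointer. A minimal sketch, assuming a kernel (for example, the 3.10 series shipped with Red Hat Enterprise Linux 7) in which link_path_walk() has a parameter named nd, and assuming the matching kernel debuginfo packages are installed; the probe point and parameter name vary between kernel versions:

stap -e 'probe kernel.function("link_path_walk") { printf("%s -> %s\n", execname(), fullpath_struct_nameidata($nd)) }'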
Chapter 27. kubernetes | Chapter 27. kubernetes The namespace for Kubernetes-specific metadata Data type group 27.1. kubernetes.pod_name The name of the pod Data type keyword 27.2. kubernetes.pod_id The Kubernetes ID of the pod Data type keyword 27.3. kubernetes.namespace_name The name of the namespace in Kubernetes Data type keyword 27.4. kubernetes.namespace_id The ID of the namespace in Kubernetes Data type keyword 27.5. kubernetes.host The Kubernetes node name Data type keyword 27.6. kubernetes.container_name The name of the container in Kubernetes Data type keyword 27.7. kubernetes.annotations Annotations associated with the Kubernetes object Data type group 27.8. kubernetes.labels Labels present on the original Kubernetes Pod Data type group 27.9. kubernetes.event The Kubernetes event obtained from the Kubernetes master API. This event description loosely follows type Event in Event v1 core . Data type group 27.9.1. kubernetes.event.verb The type of event, ADDED , MODIFIED , or DELETED Data type keyword Example value ADDED 27.9.2. kubernetes.event.metadata Information related to the location and time of the event creation Data type group 27.9.2.1. kubernetes.event.metadata.name The name of the object that triggered the event creation Data type keyword Example value java-mainclass-1.14d888a4cfc24890 27.9.2.2. kubernetes.event.metadata.namespace The name of the namespace where the event originally occurred. Note that it differs from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 27.9.2.3. kubernetes.event.metadata.selfLink A link to the event Data type keyword Example value /api/v1/namespaces/javaj/events/java-mainclass-1.14d888a4cfc24890 27.9.2.4. kubernetes.event.metadata.uid The unique ID of the event Data type keyword Example value d828ac69-7b58-11e7-9cf5-5254002f560c 27.9.2.5. kubernetes.event.metadata.resourceVersion A string that identifies the server's internal version of the event. Clients can use this string to determine when objects have changed. Data type integer Example value 311987 27.9.3. kubernetes.event.involvedObject The object that the event is about. Data type group 27.9.3.1. kubernetes.event.involvedObject.kind The type of object Data type keyword Example value ReplicationController 27.9.3.2. kubernetes.event.involvedObject.namespace The namespace name of the involved object. Note that it may differ from kubernetes.namespace_name , which is the namespace where the eventrouter application is deployed. Data type keyword Example value default 27.9.3.3. kubernetes.event.involvedObject.name The name of the object that triggered the event Data type keyword Example value java-mainclass-1 27.9.3.4. kubernetes.event.involvedObject.uid The unique ID of the object Data type keyword Example value e6bff941-76a8-11e7-8193-5254002f560c 27.9.3.5. kubernetes.event.involvedObject.apiVersion The version of kubernetes master API Data type keyword Example value v1 27.9.3.6. kubernetes.event.involvedObject.resourceVersion A string that identifies the server's internal version of the pod that triggered the event. Clients can use this string to determine when objects have changed. Data type keyword Example value 308882 27.9.4. kubernetes.event.reason A short machine-understandable string that gives the reason for generating this event Data type keyword Example value SuccessfulCreate 27.9.5. 
kubernetes.event.source_component The component that reported this event Data type keyword Example value replication-controller 27.9.6. kubernetes.event.firstTimestamp The time at which the event was first recorded Data type date Example value 2017-08-07 10:11:57.000000000 Z 27.9.7. kubernetes.event.count The number of times this event has occurred Data type integer Example value 1 27.9.8. kubernetes.event.type The type of event, Normal or Warning . New types could be added in the future. Data type keyword Example value Normal | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/cluster-logging-exported-fields-kubernetes_cluster-logging-exported-fields |
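When records carrying these fields are exported as JSON, the names above can be used directly for filtering. A small sketch using jq; the input file app-logs.jsonl is a hypothetical file containing one JSON record per line:

# Print namespace, pod, container, and node for each record.
jq -r '.kubernetes | [.namespace_name, .pod_name, .container_name, .host] | @tsv' app-logs.jsonl
# Keep only Kubernetes events of type Warning.
jq 'select(.kubernetes.event.type == "Warning")' app-logs.jsonl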
2.3. SELinux Contexts for Users | 2.3. SELinux Contexts for Users Use the following command to view the SELinux context associated with your Linux user: In Red Hat Enterprise Linux, Linux users run unconfined by default. This SELinux context shows that the Linux user is mapped to the SELinux unconfined_u user, running as the unconfined_r role, and is running in the unconfined_t domain. s0-s0 is an MLS range, which, in this case, is the same as just s0 . The categories the user has access to are defined by c0.c1023 , which is all categories ( c0 through to c1023 ). | [
"~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-selinux_contexts-selinux_contexts_for_users |
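The unconfined_u portion of the context comes from the mapping of Linux users to SELinux users. As a brief sketch, that mapping can be listed with semanage, assuming the policycoreutils-python package is installed:

~]# semanage login -l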
28.3. Logging to a Remote System During the Installation | 28.3. Logging to a Remote System During the Installation By default, the installation process sends log messages to the console as they are generated. You may specify that these messages go to a remote system that runs a syslog service. To configure remote logging, add the syslog option. Specify the IP address of the logging system, and the UDP port number of the log service on that system. By default, syslog services that accept remote messages listen on UDP port 514. For example, to connect to a syslog service on the system 192.168.1.20 , enter the following at the boot: prompt: 28.3.1. Configuring a Log Server Red Hat Enterprise Linux uses rsyslog to provide a syslog service. The default configuration of rsyslog rejects messages from remote systems. Warning Only enable remote syslog access on secured networks. The rsyslog configuration detailed below does not make use of any of the security measures available in rsyslog. Crackers may slow or crash systems that permit access to the logging service, by sending large quantities of false log messages. In addition, hostile users may intercept or falsify messages sent to the logging service over the network. To configure a Red Hat Enterprise Linux system to accept log messages from other systems on the network, edit the file /etc/rsyslog.conf . You must use root privileges to edit the file /etc/rsyslog.conf . Uncomment the following lines by removing the hash preceding them: Restart the rsyslog service to apply the change: Enter the root password when prompted. Note By default, the syslog service listens on UDP port 514. The firewall must be configured to permit connections to this port from other systems. Choose System > Administration > Firewall . Select Other ports , and Add . Enter 514 in the Port(s) field, and specify udp as the Protocol . | [
"linux syslog= 192.168.1.20:514",
"USDModLoad imudp.so USDUDPServerRun 514",
"su -c '/sbin/service rsyslog restart'"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-remote-logging |
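The firewall step above uses the graphical tool; on a Red Hat Enterprise Linux 6 log server the same UDP port can also be opened from a shell. A brief sketch, assuming the default iptables firewall is in use and that no stricter source restrictions are required:

su -c 'iptables -I INPUT -p udp --dport 514 -j ACCEPT'
su -c '/sbin/service iptables save'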
Chapter 3. Setting Up Load Balancer Prerequisites for Keepalived | Chapter 3. Setting Up Load Balancer Prerequisites for Keepalived Load Balancer using keepalived consists of two basic groups: the LVS routers and the real servers. To prevent a single point of failure, each group should have at least two members. The LVS router group should consist of two identical or very similar systems running Red Hat Enterprise Linux. One will act as the active LVS router while the other stays in hot standby mode, so they need to have as close to the same capabilities as possible. Before choosing and configuring the hardware for the real server group, determine which of the three Load Balancer topologies to use. 3.1. The NAT Load Balancer Network The NAT topology allows for great latitude in utilizing existing hardware, but it is limited in its ability to handle large loads because all packets going into and coming out of the pool pass through the Load Balancer router. Network Layout The topology for Load Balancer using NAT routing is the easiest to configure from a network layout perspective because only one access point to the public network is needed. The real servers are on a private network and respond to all requests through the LVS router. Hardware In a NAT topology, each real server only needs one NIC since it will only be responding to the LVS router. The LVS routers, on the other hand, need two NICs each to route traffic between the two networks. Because this topology creates a network bottleneck at the LVS router, Gigabit Ethernet NICs can be employed on each LVS router to increase the bandwidth the LVS routers can handle. If Gigabit Ethernet is employed on the LVS routers, any switch connecting the real servers to the LVS routers must have at least two Gigabit Ethernet ports to handle the load efficiently. Software Because the NAT topology requires the use of iptables for some configurations, there can be a large amount of software configuration outside of Keepalived. In particular, FTP services and the use of firewall marks requires extra manual configuration of the LVS routers to route requests properly. 3.1.1. Configuring Network Interfaces for Load Balancer with NAT To set up Load Balancer with NAT, you must first configure the network interfaces for the public network and the private network on the LVS routers. In this example, the LVS routers' public interfaces ( eth0 ) will be on the 203.0.113.0/24 network and the private interfaces which link to the real servers ( eth1 ) will be on the 10.11.12.0/24 network. Important At the time of writing, the NetworkManager service is not compatible with Load Balancer. In particular, IPv6 VIPs are known not to work when the IPv6 addresses are assigned by SLAAC. For this reason, the examples shown here use configuration files and the network service. On the active or primary LVS router node, the public interface's network configuration file, /etc/sysconfig/network-scripts/ifcfg-eth0 , could look something like this: The configuration file, /etc/sysconfig/network-scripts/ifcfg-eth1 , for the private NAT interface on the LVS router could look something like this: The VIP address must be different to the static address but in the same range. In this example, the VIP for the LVS router's public interface could be configured to be 203.0.113.10 and the VIP for the private interface can be 10.11.12.10. The VIP addresses are set by the virtual_ipaddress option in the /etc/keepalived/keepalived.conf file. 
For more information, see Section 4.1, "A Basic Keepalived configuration" . Also ensure that the real servers route requests back to the VIP for the NAT interface. Important The sample Ethernet interface configuration settings in this section are for the real IP addresses of an LVS router and not the floating IP addresses. After configuring the primary LVS router node's network interfaces, configure the backup LVS router's real network interfaces (taking care that none of the IP address conflict with any other IP addresses on the network). Important Ensure that each interface on the backup node services the same network as the interface on the primary node. For instance, if eth0 connects to the public network on the primary node, it must also connect to the public network on the backup node. 3.1.2. Routing on the Real Servers The most important thing to remember when configuring the real servers network interfaces in a NAT topology is to set the gateway for the NAT floating IP address of the LVS router. In this example, that address is 10.11.12.10. Note Once the network interfaces are up on the real servers, the machines will be unable to ping or connect in other ways to the public network. This is normal. You will, however, be able to ping the real IP for the LVS router's private interface, in this case 10.11.12.9. The real server's configuration file, /etc/sysconfig/network-scripts/ifcfg-eth0 , file could look similar to this: Warning If a real server has more than one network interface configured with a GATEWAY= line, the first one to come up will get the gateway. Therefore if both eth0 and eth1 are configured and eth1 is used for Load Balancer, the real servers may not route requests properly. It is best to turn off extraneous network interfaces by setting ONBOOT=no in their network configuration files within the /etc/sysconfig/network-scripts/ directory or by making sure the gateway is correctly set in the interface which comes up first. 3.1.3. Enabling NAT Routing on the LVS Routers In a simple NAT Load Balancer configuration where each clustered service uses only one port, like HTTP on port 80, the administrator need only enable packet forwarding on the LVS routers for the requests to be properly routed between the outside world and the real servers. However, more configuration is necessary when the clustered services require more than one port to go to the same real server during a user session. Once forwarding is enabled on the LVS routers and the real servers are set up and have the clustered services running, use keepalived to configure IP information. Warning Do not configure the floating IP for eth0 or eth1 by manually editing network configuration files or using a network configuration tool. Instead, configure them by means of the keepalived.conf file. When finished, start the keepalived service. Once it is up and running, the active LVS router will begin routing requests to the pool of real servers. | [
"DEVICE=eth0 BOOTPROTO=static ONBOOT=yes IPADDR=203.0.113.9 NETMASK=255.255.255.0 GATEWAY=203.0.113.254",
"DEVICE=eth1 BOOTPROTO=static ONBOOT=yes IPADDR=10.11.12.9 NETMASK=255.255.255.0",
"DEVICE=eth0 ONBOOT=yes BOOTPROTO=static IPADDR=10.11.12.1 NETMASK=255.255.255.0 GATEWAY=10.11.12.10"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ch-lvs-setup-prereqs-vsa |
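Section 3.1.3 depends on packet forwarding being enabled on the LVS routers, and Section 3.1.1 notes that the floating addresses are set through the virtual_ipaddress option rather than in the ifcfg files. The following sketch combines both pieces using the example addresses above; the vrrp_instance name, virtual_router_id, and priority values are illustrative assumptions, not values taken from this chapter:

# Enable packet forwarding persistently on each LVS router.
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

# Minimal keepalived.conf fragment carrying the public and private floating IPs.
cat >> /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance RH_EXT {
    state MASTER
    interface eth0
    virtual_router_id 50
    priority 100
    virtual_ipaddress {
        203.0.113.10
        10.11.12.10 dev eth1
    }
}
EOF
systemctl start keepalived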
5.3.2.2. SCSI | 5.3.2.2. SCSI Formally known as the Small Computer System Interface, SCSI as it is known today originated in the early 80s and was declared a standard in 1986. Like ATA, SCSI makes use of a bus topology. However, there the similarities end. Using a bus topology means that every device on the bus must be uniquely identified somehow. While ATA supports only two different devices for each bus and gives each one a specific name, SCSI does this by assigning each device on a SCSI bus a unique numeric address or SCSI ID . Each device on a SCSI bus must be configured (usually by jumpers or switches [17] ) to respond to its SCSI ID. Before continuing any further in this discussion, it is important to note that the SCSI standard does not represent a single interface, but a family of interfaces. There are several areas in which SCSI varies: Bus width Bus speed Electrical characteristics The original SCSI standard described a bus topology in which eight lines in the bus were used for data transfer. This meant that the first SCSI devices could transfer data one byte at a time. In later years, the standard was expanded to permit implementations where sixteen lines could be used, doubling the amount of data that devices could transfer. The original "8-bit" SCSI implementations were then referred to as narrow SCSI, while the newer 16-bit implementations were known as wide SCSI. Originally, the bus speed for SCSI was set to 5MHz, permitting a 5MB/second transfer rate on the original 8-bit SCSI bus. However, subsequent revisions to the standard doubled that speed to 10MHz, resulting in 10MB/second for narrow SCSI and 20MB/second for wide SCSI. As with the bus width, the changes in bus speed received new names, with the 10MHz bus speed being termed fast . Subsequent enhancements pushed bus speeds to ultra (20MHz), fast-40 (40MHz), and fast-80 [18] . Further increases in transfer rates lead to several different versions of the ultra160 bus speed. By combining these terms, various SCSI configurations can be concisely named. For example, "ultra-wide SCSI" refers to a 16-bit SCSI bus running at 20MHz. The original SCSI standard used single-ended signaling; this is an electrical configuration where only one conductor is used to pass an electrical signal. Later implementations also permitted the use of differential signaling, where two conductors are used to pass a signal. Differential SCSI (which was later renamed to high voltage differential or HVD SCSI) had the benefit of reduced sensitivity to electrical noise and allowed longer cable lengths, but it never became popular in the mainstream computer market. A later implementation, known as low voltage differential (LVD), has finally broken through to the mainstream and is a requirement for the higher bus speeds. The width of a SCSI bus not only dictates the amount of data that can be transferred with each clock cycle, but it also determines how many devices can be connected to a bus. Regular SCSI supports 8 uniquely-addressed devices, while wide SCSI supports 16. In either case, you must make sure that all devices are set to use a unique SCSI ID. Two devices sharing a single ID causes problems that could lead to data corruption. One other thing to keep in mind is that every device on the bus uses an ID. This includes the SCSI controller. Quite often system administrators forget this and unwittingly set a device to use the same SCSI ID as the bus's controller. 
This also means that, in practice, only 7 (or 15, for wide SCSI) devices may be present on a single bus, as each bus must reserve an ID for the controller. Note Most SCSI implementations include some means of scanning the SCSI bus; this is often used to confirm that all the devices are properly configured. If a bus scan returns the same device for every single SCSI ID, that device has been incorrectly set to the same SCSI ID as the SCSI controller. To resolve the problem, reconfigure the device to use a different (and unique) SCSI ID. Because of SCSI's bus-oriented architecture, it is necessary to properly terminate both ends of the bus. Termination is accomplished by placing a load of the correct electrical impedance on each conductor comprising the SCSI bus. Termination is an electrical requirement; without it, the various signals present on the bus would be reflected off the ends of the bus, garbling all communication. Many (but not all) SCSI devices come with internal terminators that can be enabled or disabled using jumpers or switches. External terminators are also available. One last thing to keep in mind about SCSI -- it is not just an interface standard for mass storage devices. Many other devices (such as scanners, printers, and communications devices) use SCSI. Although these are much less common than SCSI mass storage devices, they do exist. However, it is likely that, with the advent of USB and IEEE-1394 (often called Firewire), these interfaces will be used more for these types of devices in the future. Note The USB and IEEE-1394 interfaces are also starting to make inroads in the mass storage arena; however, no native USB or IEEE-1394 mass-storage devices currently exist. Instead, the present-day offerings are based on ATA or SCSI devices with external conversion circuitry. No matter what interface a mass storage device uses, the inner workings of the device has a bearing on its performance. The following section explores this important subject. [17] Some storage hardware (usually those that incorporate removable drive "carriers") is designed so that the act of plugging a module into place automatically sets the SCSI ID to an appropriate value. [18] Fast-80 is not technically a change in bus speed; instead the 40MHz bus was retained, but data was clocked at both the rising and falling of each clock pulse, effectively doubling the throughput. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-interface-standard-scsi |
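On a running Linux system, the bus-scan check mentioned in the note above can be approximated entirely in software. The following is a minimal sketch; the lsscsi utility is an optional package and is assumed to be installed, while the /proc and /sys views are available on any modern kernel.

# List SCSI devices as [host:channel:target:lun]; the target field is the SCSI ID.
lsscsi
# The kernel's own view of every scanned SCSI device, with vendor and model strings.
cat /proc/scsi/scsi
# The same topology through sysfs; each entry name encodes host:channel:id:lun.
ls /sys/class/scsi_device/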
21.3. Creating Netgroups | 21.3. Creating Netgroups 21.3.1. Adding a Netgroup To add a Netgroup, you can use: the IdM web UI (see the section called "Web UI: Adding a Netgroup" ) the command line (see the section called "Command Line: Adding a Netgroup" ) Web UI: Adding a Netgroup Select Identity Groups Netgroups Click Add . Enter a unique name and, optionally, a description. The group name is the identifier used for the netgroup in the IdM domain. You cannot change it later. Click Add and Edit to save the changes and to start editing the entry. The default NIS domain is set to the IdM domain name. Optionally, you can enter the name of the alternative NIS domain in the NIS domain name field. Figure 21.1. Netgroup Tab The NIS domain name field sets the domain that appears in the netgroup triple. It does not affect which NIS domain the Identity Management NIS listener responds to. Add members, as described in the section called "Web UI: Adding Members to a Netgroup" . Click Save . Command Line: Adding a Netgroup You can add a new netgroup using the ipa netgroup-add command. Specify: the group name. optionally, a description. optionally, the NIS domain name if it is different than the IdM domain name. Note The --nisdomain option sets the domain that appears in the netgroup triple. It does not affect which NIS domain the Identity Management listener responds to. For example: To add members to the netgroup, see the section called "Command Line: Adding Members to a Netgroup" . 21.3.2. Adding Members to a Netgroup Beside users and hosts, netgroups can contain user groups, host groups, and other netgroups (nested groups) as members. Depending on the size of a group, it can take up to several minutes after you create a nested groups for the members of the child group to show up as members of the parent group. To add members to a Netgroup, you can use: the IdM web UI (see the section called "Web UI: Adding Members to a Netgroup" ) the command line (see the section called "Command Line: Adding Members to a Netgroup" ) Warning Do not create recursive nested groups. For example, if GroupA is a member of GroupB , do not add GroupB as a member of GroupA . Recursive groups are not supported and can cause unpredictable behavior. Web UI: Adding Members to a Netgroup To add members to a netgroup using the Web UI: Select Identity Groups Netgroups Click the name of the netgroup to which to add members. Click Add to the required member type. Figure 21.2. User Menu in the Netgroup Tab Select the members you want to add, and click > to confirm. Figure 21.3. Add User Menu in the Netgroup Tab Click Add . Command Line: Adding Members to a Netgroup After you created the netgroup, you can add members using the ipa netgroup-add-member command: To set more than one member, use a comma-separated list inside a set of curly braces. For example: | [
"ipa netgroup-add --desc=\"Netgroup description\" --nisdomain=\"example.com\" example-netgroup",
"ipa netgroup-add-member --users= user_name --groups= group_name --hosts= host_name --hostgroups= host_group_name --netgroups= netgroup_name group_nameame",
"ipa netgroup-add-member --users={user1;user2,user3} --groups={group1,group2} example-group"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-netgroups |
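Putting the two command-line procedures together, an end-to-end session might look like the sketch below. The netgroup name, NIS domain, and member names are illustrative placeholders rather than values taken from this guide, and an active Kerberos ticket for an IdM administrator is assumed.

# Create the netgroup with a description and an alternative NIS domain.
ipa netgroup-add --desc="Example netgroup" --nisdomain="example.com" example-netgroup
# Add two users, a user group, and a host as members.
ipa netgroup-add-member --users={user1,user2} --groups=group1 --hosts=client1.example.com example-netgroup
# Verify the membership that was stored.
ipa netgroup-show example-netgroup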
Chapter 1. Working with systemd unit files | Chapter 1. Working with systemd unit files The systemd unit files represent your system resources. As a system administrator, you can perform the following advanced tasks: Create custom unit files Modify existing unit files Work with instantiated units 1.1. Introduction to unit files A unit file contains configuration directives that describe the unit and define its behavior. Several systemctl commands work with unit files in the background. To make finer adjustments, you can edit or create unit files manually. You can find three main directories where unit files are stored on the system, the /etc/systemd/system/ directory is reserved for unit files created or customized by the system administrator. Unit file names take the following form: Here, unit_name stands for the name of the unit and type_extension identifies the unit type. For example, you can find an sshd.service as well as an sshd.socket unit present on your system. Unit files can be supplemented with a directory for additional configuration files. For example, to add custom configuration options to sshd.service , create the sshd.service.d/custom.conf file and insert additional directives there. For more information on configuration directories, see Modifying existing unit files . The systemd system and service manager can also create the sshd.service.wants/ and sshd.service.requires/ directories. These directories contain symbolic links to unit files that are dependencies of the sshd service. systemd creates the symbolic links automatically either during installation according to [Install] unit file options or at runtime based on [Unit] options. You can also create these directories and symbolic links manually. Also, the sshd.service.wants/ and sshd.service.requires/ directories can be created. These directories contain symbolic links to unit files that are dependencies of the sshd service. The symbolic links are automatically created either during installation according to [Install] unit file options or at runtime based on [Unit] options. It is also possible to create these directories and symbolic links manually. For more details on [Install] and [Unit] options, see the tables below. Many unit file options can be set using the so called unit specifiers - wildcard strings that are dynamically replaced with unit parameters when the unit file is loaded. This enables creation of generic unit files that serve as templates for generating instantiated units. See Working with instantiated units . 1.2. Systemd unit files locations You can find the unit configuration files in one of the following directories: Table 1.1. systemd unit files locations Directory Description /usr/lib/systemd/system/ systemd unit files distributed with installed RPM packages. /run/systemd/system/ systemd unit files created at run time. This directory takes precedence over the directory with installed service unit files. /etc/systemd/system/ systemd unit files created by using the systemctl enable command as well as unit files added for extending a service. This directory takes precedence over the directory with runtime unit files. The default configuration of systemd is defined during the compilation and you can find the configuration in the /etc/systemd/system.conf file. By editing this file, you can modify the default configuration by overriding values for systemd units globally. 
For example, to override the default value of the timeout limit, which is set to 90 seconds, use the DefaultTimeoutStartSec parameter to input the required value in seconds. 1.3. Unit file structure Unit files typically consist of three following sections: The [Unit] section Contains generic options that are not dependent on the type of the unit. These options provide unit description, specify the unit's behavior, and set dependencies to other units. For a list of most frequently used [Unit] options, see Important [Unit] section options . The [Unit type] section Contains type-specific directives, these are grouped under a section named after the unit type. For example, service unit files contain the [Service] section. The [Install] section Contains information about unit installation used by systemctl enable and disable commands. For a list of options for the [Install] section, see Important [Install] section options . Additional resources Important [Unit] section options Important [Service] section options Important [Install] section options 1.4. Important [Unit] section options The following tables lists important options of the [Unit] section. Table 1.2. Important [Unit] section options Option [a] Description Description A meaningful description of the unit. This text is displayed for example in the output of the systemctl status command. Documentation Provides a list of URIs referencing documentation for the unit. After [b] Defines the order in which units are started. The unit starts only after the units specified in After are active. Unlike Requires , After does not explicitly activate the specified units. The Before option has the opposite functionality to After . Requires Configures dependencies on other units. The units listed in Requires are activated together with the unit. If any of the required units fail to start, the unit is not activated. Wants Configures weaker dependencies than Requires . If any of the listed units does not start successfully, it has no impact on the unit activation. This is the recommended way to establish custom unit dependencies. Conflicts Configures negative dependencies, an opposite to Requires . [a] For a complete list of options configurable in the [Unit] section, see the systemd.unit(5) manual page. [b] In most cases, it is sufficient to set only the ordering dependencies with After and Before unit file options. If you also set a requirement dependency with Wants (recommended) or Requires , the ordering dependency still needs to be specified. That is because ordering and requirement dependencies work independently from each other. 1.5. Important [Service] section options The following tables lists important options of the [Service] section. Table 1.3. Important [Service] section options Option [a] Description Type Configures the unit process startup type that affects the functionality of ExecStart and related options. One of: * simple - The default value. The process started with ExecStart is the main process of the service. * forking - The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete. * oneshot - This type is similar to simple , but the process exits before starting consequent units. * dbus - This type is similar to simple , but consequent units are started only after the main process gains a D-Bus name. 
* notify - This type is similar to simple , but consequent units are started only after a notification message is sent via the sd_notify() function. * idle - similar to simple , the actual execution of the service binary is delayed until all jobs are finished, which avoids mixing the status output with shell output of services. ExecStart Specifies commands or scripts to be executed when the unit is started. ExecStartPre and ExecStartPost specify custom commands to be executed before and after ExecStart . Type=oneshot enables specifying multiple custom commands that are then executed sequentially. ExecStop Specifies commands or scripts to be executed when the unit is stopped. ExecReload Specifies commands or scripts to be executed when the unit is reloaded. Restart With this option enabled, the service is restarted after its process exits, with the exception of a clean stop by the systemctl command. RemainAfterExit If set to True, the service is considered active even when all its processes exited. Default value is False. This option is especially useful if Type=oneshot is configured. [a] For a complete list of options configurable in the [Service] section, see the systemd.service(5) manual page. 1.6. Important [Install] section options The following tables lists important options of the [Install] section. Table 1.4. Important [Install] section options Option [a] Description Alias Provides a space-separated list of additional names for the unit. Most systemctl commands, excluding systemctl enable , can use aliases instead of the actual unit name. RequiredBy A list of units that depend on the unit. When this unit is enabled, the units listed in RequiredBy gain a Require dependency on the unit. WantedBy A list of units that weakly depend on the unit. When this unit is enabled, the units listed in WantedBy gain a Want dependency on the unit. Also Specifies a list of units to be installed or uninstalled along with the unit. DefaultInstance Limited to instantiated units, this option specifies the default instance for which the unit is enabled. See Working with instantiated units . [a] For a complete list of options configurable in the [Install] section, see the systemd.unit(5) manual page. 1.7. Creating custom unit files There are several use cases for creating unit files from scratch: you could run a custom daemon, create a second instance of some existing service as in Creating a custom unit file by using the second instance of the sshd service On the other hand, if you intend just to modify or extend the behavior of an existing unit, use the instructions from Modifying existing unit files . Procedure To create a custom service, prepare the executable file with the service. The file can contain a custom-created script, or an executable delivered by a software provider. If required, prepare a PID file to hold a constant PID for the main process of the custom service. You can also include environment files to store shell variables for the service. Make sure the source script is executable (by executing the chmod a+x ) and is not interactive. Create a unit file in the /etc/systemd/system/ directory and make sure it has correct file permissions. Execute as root : Replace <name> with a name of the service you want to created. Note that the file does not need to be executable. Open the created <name> .service file and add the service configuration options. You can use various options depending on the type of service you wish to create, see Unit file structure . 
The following is an example unit configuration for a network-related service: <service_description> is an informative description that is displayed in journal log files and in the output of the systemctl status command. the After setting ensures that the service is started only after the network is running. Add a space-separated list of other relevant services or targets. path_to_executable stands for the path to the actual service executable. Type=forking is used for daemons that make the fork system call. The main process of the service is created with the PID specified in path_to_pidfile . Find other startup types in Important [Service] section options . WantedBy states the target or targets that the service should be started under. Think of these targets as of a replacement of the older concept of runlevels. Notify systemd that a new <name> .service file exists: Warning Always execute the systemctl daemon-reload command after creating new unit files or modifying existing unit files. Otherwise, the systemctl start or systemctl enable commands could fail due to a mismatch between states of systemd and actual service unit files on disk. Note, that on systems with a large number of units this can take a long time, as the state of each unit has to be serialized and subsequently deserialized during the reload. 1.8. Creating a custom unit file by using the second instance of the sshd service If you need to configure and run multiple instances of a service, you can create copies of the original service configuration files and modifying certain parameters to avoid conflicts with the primary instance of the service. Procedure To create a second instance of the sshd service: Create a copy of the sshd_config file that the second daemon will use: Edit the sshd-second_config file created in the step to assign a different port number and PID file to the second daemon: See the sshd_config (5) manual page for more information about Port and PidFile options. Make sure the port you choose is not in use by any other service. The PID file does not have to exist before running the service, it is generated automatically on service start. Create a copy of the systemd unit file for the sshd service: Alter the created sshd-second.service : Modify the Description option: Add sshd.service to services specified in the After option, so that the second instance starts only after the first one has already started: Remove the ExecStartPre=/usr/sbin/sshd-keygen line, the first instance of sshd includes key generation. Add the -f /etc/ssh/sshd-second_config parameter to the sshd command, so that the alternative configuration file is used: After the modifications, the sshd-second.service unit file contains the following settings: If using SELinux, add the port for the second instance of sshd to SSH ports, otherwise the second instance of sshd will be rejected to bind to the port: Enable sshd-second.service to start automatically on boot: Verify if the sshd-second.service is running by using the systemctl status command. Verify if the port is enabled correctly by connecting to the service: Make sure you configure firewall to allow connections to the second instance of sshd . 1.9. Finding the systemd service description You can find descriptive information about the script on the line starting with #description . Use this description together with the service name in the Description option in the [Unit] section of the unit file. The header might contain similar data on the #Short-Description and #Description lines. 1.10. 
Finding the systemd service dependencies The Linux standard base (LSB) header might contain several directives that form dependencies between services. Most of them are translatable to systemd unit options, see the following table: Table 1.5. Dependency options from the LSB header LSB Option Description Unit File Equivalent Provides Specifies the boot facility name of the service, that can be referenced in other init scripts (with the "USD" prefix). This is no longer needed as unit files refer to other units by their file names. - Required-Start Contains boot facility names of required services. This is translated as an ordering dependency, boot facility names are replaced with unit file names of corresponding services or targets they belong to. For example, in case of postfix , the Required-Start dependency on USDnetwork was translated to the After dependency on network.target. After , Before Should-Start Constitutes weaker dependencies than Required-Start. Failed Should-Start dependencies do not affect the service startup. After , Before Required-Stop , Should-Stop Constitute negative dependencies. Conflicts 1.11. Finding default targets of the service The line starting with #chkconfig contains three numerical values. The most important is the first number that represents the default runlevels in which the service is started. Map these runlevels to equivalent systemd targets. Then list these targets in the WantedBy option in the [Install] section of the unit file. For example, postfix was previously started in runlevels 2, 3, 4, and 5, which translates to multi-user.target and graphical.target. Note that the graphical.target depends on multiuser.target, therefore it is not necessary to specify both. You might find information about default and forbidden runlevels also at #Default-Start and #Default-Stop lines in the LSB header. The other two values specified on the #chkconfig line represent startup and shutdown priorities of the init script. These values are interpreted by systemd if it loads the init script, but there is no unit file equivalent. 1.12. Finding files used by the service Init scripts require loading a function library from a dedicated directory and allow importing configuration, environment, and PID files. Environment variables are specified on the line starting with #config in the init script header, which translates to the EnvironmentFile unit file option. The PID file specified on the #pidfile init script line is imported to the unit file with the PIDFile option. The key information that is not included in the init script header is the path to the service executable, and potentially some other files required by the service. In versions of Red Hat Enterprise Linux, init scripts used a Bash case statement to define the behavior of the service on default actions, such as start , stop , or restart , as well as custom-defined actions. The following excerpt from the postfix init script shows the block of code to be executed at service start. The extensibility of the init script allowed specifying two custom functions, conf_check() and make_aliasesdb() , that are called from the start() function block. On closer look, several external files and directories are mentioned in the above code: the main service executable /usr/sbin/postfix , the /etc/postfix/ and /var/spool/postfix/ configuration directories, as well as the /usr/sbin/postconf/ directory. 
systemd supports only the predefined actions, but enables executing custom executables with ExecStart , ExecStartPre , ExecStartPost , ExecStop , and ExecReload options. The /usr/sbin/postfix together with supporting scripts are executed on service start. Converting complex init scripts requires understanding the purpose of every statement in the script. Some of the statements are specific to the operating system version, therefore you do not need to translate them. On the other hand, some adjustments might be needed in the new environment, both in unit file as well as in the service executable and supporting files. 1.13. Modifying existing unit files If you want to modify existing unit files proceed to the /etc/systemd/system/ directory. Note that you should not modify the your the default unit files, which your system stores in the /usr/lib/systemd/system/ directory. Procedure Depending on the extent of the required changes, pick one of the following approaches: Create a directory for supplementary configuration files at /etc/systemd/system/ <unit> .d/ . This method is recommended for most use cases. You can extend the default configuration with additional functionality, while still referring to the original unit file. Changes to the default unit introduced with a package upgrade are therefore applied automatically. See Extending the default unit configuration for more information. Create a copy of the original unit file from /usr/lib/systemd/system/`directory in the `/etc/systemd/system/ directory and make changes there. The copy overrides the original file, therefore changes introduced with the package update are not applied. This method is useful for making significant unit changes that should persist regardless of package updates. See Overriding the default unit configuration for details. To return to the default configuration of the unit, delete custom-created configuration files in the /etc/systemd/system/ directory. Apply changes to unit files without rebooting the system: The daemon-reload option reloads all unit files and recreates the entire dependency tree, which is needed to immediately apply any change to a unit file. As an alternative, you can achieve the same result with the following command: If the modified unit file belongs to a running service, restart the service: Important To modify properties, such as dependencies or timeouts, of a service that is handled by a SysV initscript, do not modify the initscript itself. Instead, create a systemd drop-in configuration file for the service as described in: Extending the default unit configuration and Overriding the default unit configuration . Then manage this service in the same way as a normal systemd service. For example, to extend the configuration of the network service, do not modify the /etc/rc.d/init.d/network initscript file. Instead, create new directory /etc/systemd/system/network.service.d/ and a systemd drop-in file /etc/systemd/system/network.service.d/ my_config .conf . Then, put the modified values into the drop-in file. Note: systemd knows the network service as network.service , which is why the created directory must be called network.service.d 1.14. Extending the default unit configuration You can extend the default unit file with additional systemd configuration options. Procedure Create a configuration directory in /etc/systemd/system/ : Replace <name> with the name of the service you want to extend. The syntax applies to all unit types. 
Create a configuration file with the .conf suffix: Replace <config_name> with the name of the configuration file. This file adheres to the normal unit file structure and you have to specify all directives in the appropriate sections, see Unit file structure . For example, to add a custom dependency, create a configuration file with the following content: The <new_dependency> stands for the unit to be marked as a dependency. Another example is a configuration file that restarts the service after its main process exited, with a delay of 30 seconds: Create small configuration files focused only on one task. Such files can be easily moved or linked to configuration directories of other services. Apply changes to the unit: Example 1.1. Extending the httpd.service configuration To modify the httpd.service unit so that a custom shell script is automatically executed when starting the Apache service, perform the following steps. Create a directory and a custom configuration file: Specify the script you want to execute after the main service process by inserting the following text to the custom_script.conf file: Apply the unit changes:: Note The configuration files from the /etc/systemd/system/ configuration directories take precedence over unit files in /usr/lib/systemd/system/ . Therefore, if the configuration files contain an option that can be specified only once, such as Description or ExecStart , the default value of this option is overridden. Note that in the output of the systemd-delta command, described in Monitoring overridden units ,such units are always marked as [EXTENDED], even though in sum, certain options are actually overridden. 1.15. Overriding the default unit configuration You can make changes to the unit file configuration that will persist after updating the package that provides the unit file. Procedure Copy the unit file to the /etc/systemd/system/ directory by entering the following command as root : Open the copied file with a text editor, and make changes. Apply unit changes: 1.16. Changing the timeout limit You can specify a timeout value per service to prevent a malfunctioning service from freezing the system. Otherwise, the default value for timeout is 90 seconds for normal services and 300 seconds for SysV-compatible services. Procedure To extend timeout limit for the httpd service: Copy the httpd unit file to the /etc/systemd/system/ directory: Open the /etc/systemd/system/httpd.service file and specify the TimeoutStartUSec value in the [Service] section: Reload the systemd daemon: Optional. Verify the new timeout value: Note To change the timeout limit globally, input the DefaultTimeoutStartSec in the /etc/systemd/system.conf file. 1.17. Monitoring overridden units You can display an overview of overridden or modified unit files by using the systemd-delta command. Procedure Display an overview of overridden or modified unit files: For example, the output of the command can look as follows: 1.18. Working with instantiated units You can manage multiple instances of a service by using a single template configuration. You can define a generic template for a unit and generate multiple instances of that unit with specific parameters at runtime. The template is indicated by the at sign (@). Instantiated units can be started from another unit file (using Requires or Wants options), or with the systemctl start command. Instantiated service units are named the following way: The <template_name> stands for the name of the template configuration file. 
Replace <instance_name> with the name for the unit instance. Several instances can point to the same template file with configuration options common for all instances of the unit. Template unit name has the form of: For example, the following Wants setting in a unit file: first makes systemd search for given service units. If no such units are found, the part between "@" and the type suffix is ignored and systemd searches for the [email protected] file, reads the configuration from it, and starts the services. For example, the [email protected] template contains the following directives: When the [email protected] and [email protected] are instantiated from the above template, Description = is resolved as Getty on ttyA and Getty on ttyB . 1.19. Important unit specifiers You can use the wildcard characters, called unit specifiers , in any unit configuration file. Unit specifiers substitute certain unit parameters and are interpreted at runtime. Table 1.6. Important unit specifiers Unit Specifier Meaning Description %n Full unit name Stands for the full unit name including the type suffix. %N has the same meaning but also replaces the forbidden characters with ASCII codes. %p Prefix name Stands for a unit name with type suffix removed. For instantiated units %p stands for the part of the unit name before the "@" character. %i Instance name Is the part of the instantiated unit name between the "@" character and the type suffix. %I has the same meaning but also replaces the forbidden characters for ASCII codes. %H Host name Stands for the hostname of the running system at the point in time the unit configuration is loaded. %t Runtime directory Represents the runtime directory, which is either /run for the root user, or the value of the XDG_RUNTIME_DIR variable for unprivileged users. For a complete list of unit specifiers, see the systemd.unit(5) manual page. 1.20. Additional resources How to set limits for services in RHEL and systemd How to write a service unit file which enforces that particular services have to be started How to decide what dependencies a systemd service unit definition should have | [
"<unit_name> . <type_extension>",
"DefaultTimeoutStartSec= required value",
"touch /etc/systemd/system/ <name> .service chmod 664 /etc/systemd/system/ <name> .service",
"[Unit] Description= <service_description> After=network.target [Service] ExecStart= <path_to_executable> Type=forking PIDFile= <path_to_pidfile> [Install] WantedBy=default.target",
"systemctl daemon-reload systemctl start <name> .service",
"cp /etc/ssh/sshd{,-second}_config",
"Port 22220 PidFile /var/run/sshd-second.pid",
"cp /usr/lib/systemd/system/sshd.service /etc/systemd/system/sshd-second.service",
"Description=OpenSSH server second instance daemon",
"After=syslog.target network.target auditd.service sshd.service",
"ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS",
"[Unit] Description=OpenSSH server second instance daemon After=syslog.target network.target auditd.service sshd.service [Service] EnvironmentFile=/etc/sysconfig/sshd ExecStart=/usr/sbin/sshd -D -f /etc/ssh/sshd-second_config USDOPTIONS ExecReload=/bin/kill -HUP USDMAINPID KillMode=process Restart=on-failure RestartSec=42s [Install] WantedBy=multi-user.target",
"semanage port -a -t ssh_port_t -p tcp 22220",
"systemctl enable sshd-second.service",
"ssh -p 22220 user@server",
"conf_check() { [ -x /usr/sbin/postfix ] || exit 5 [ -d /etc/postfix ] || exit 6 [ -d /var/spool/postfix ] || exit 5 } make_aliasesdb() { if [ \"USD(/usr/sbin/postconf -h alias_database)\" == \"hash:/etc/aliases\" ] then # /etc/aliases.db might be used by other MTA, make sure nothing # has touched it since our last newaliases call [ /etc/aliases -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -nt /etc/aliases.db ] || [ \"USDALIASESDB_STAMP\" -ot /etc/aliases.db ] || return /usr/bin/newaliases touch -r /etc/aliases.db \"USDALIASESDB_STAMP\" else /usr/bin/newaliases fi } start() { [ \"USDEUID\" != \"0\" ] && exit 4 # Check that networking is up. [ USD{NETWORKING} = \"no\" ] && exit 1 conf_check # Start daemons. echo -n USD\"Starting postfix: \" make_aliasesdb >/dev/null 2>&1 [ -x USDCHROOT_UPDATE ] && USDCHROOT_UPDATE /usr/sbin/postfix start 2>/dev/null 1>&2 && success || failure USD\"USDprog start\" RETVAL=USD? [ USDRETVAL -eq 0 ] && touch USDlockfile echo return USDRETVAL }",
"systemctl daemon-reload",
"init q",
"systemctl restart <name> .service",
"mkdir /etc/systemd/system/ <name> .service.d/",
"touch /etc/systemd/system/name.service.d/ <config_name> .conf",
"[Unit] Requires= <new_dependency> After= <new_dependency>",
"[Service] Restart=always RestartSec=30",
"systemctl daemon-reload systemctl restart <name> .service",
"mkdir /etc/systemd/system/httpd.service.d/",
"touch /etc/systemd/system/httpd.service.d/custom_script.conf",
"[Service] ExecStartPost=/usr/local/bin/custom.sh",
"systemctl daemon-reload",
"systemctl restart httpd.service",
"cp /usr/lib/systemd/system/ <name> .service /etc/systemd/system/ <name> .service",
"systemctl daemon-reload systemctl restart <name> .service",
"cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service",
"[Service] PrivateTmp=true TimeoutStartSec=10 [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl show httpd -p TimeoutStartUSec",
"systemd-delta",
"[EQUIVALENT] /etc/systemd/system/default.target /usr/lib/systemd/system/default.target [OVERRIDDEN] /etc/systemd/system/autofs.service /usr/lib/systemd/system/autofs.service --- /usr/lib/systemd/system/autofs.service 2014-10-16 21:30:39.000000000 -0400 +++ /etc/systemd/system/autofs.service 2014-11-21 10:00:58.513568275 -0500 @@ -8,7 +8,8 @@ EnvironmentFile=-/etc/sysconfig/autofs ExecStart=/usr/sbin/automount USDOPTIONS --pid-file /run/autofs.pid ExecReload=/usr/bin/kill -HUP USDMAINPID -TimeoutSec=180 +TimeoutSec=240 +Restart=Always [Install] WantedBy=multi-user.target [MASKED] /etc/systemd/system/cups.service /usr/lib/systemd/system/cups.service [EXTENDED] /usr/lib/systemd/system/sssd.service /etc/systemd/system/sssd.service.d/journal.conf 4 overridden configuration files found.",
"<template_name> @ <instance_name> .service",
"<unit_name> @.service",
"[email protected] [email protected]",
"[Unit] Description=Getty on %I [Service] ExecStart=-/sbin/agetty --noclear %I USDTERM"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_systemd_unit_files_to_customize_and_optimize_your_system/assembly_working-with-systemd-unit-files_working-with-systemd |
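As a compact illustration of the workflow described in this chapter, the sketch below creates a small custom service and then extends it with a drop-in file. The script path, unit name, and option values are examples chosen for this sketch, not names required by systemd.

# 1. Provide a non-interactive executable for the service.
sudo tee /usr/local/bin/hello-loop.sh > /dev/null <<'EOF'
#!/bin/bash
while true; do echo "hello from hello-loop"; sleep 60; done
EOF
sudo chmod a+x /usr/local/bin/hello-loop.sh

# 2. Create the unit file with the [Unit], [Service], and [Install] sections.
sudo tee /etc/systemd/system/hello-loop.service > /dev/null <<'EOF'
[Unit]
Description=Example looping service
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/hello-loop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# 3. Reload systemd, enable and start the service, and check its state.
sudo systemctl daemon-reload
sudo systemctl enable --now hello-loop.service
systemctl status hello-loop.service

# 4. Extend the unit with a drop-in file instead of editing it in place.
sudo mkdir -p /etc/systemd/system/hello-loop.service.d
sudo tee /etc/systemd/system/hello-loop.service.d/restart.conf > /dev/null <<'EOF'
[Service]
RestartSec=30
EOF
sudo systemctl daemon-reload
sudo systemctl restart hello-loop.service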
Chapter 5. Ansible-based overcloud registration | Chapter 5. Ansible-based overcloud registration The director uses Ansible-based methods to register overcloud nodes to the Red Hat Customer Portal or a Red Hat Satellite 6 server. 5.1. Red Hat Subscription Manager (RHSM) composable service The rhsm composable service provides a method to register overcloud nodes through Ansible. Each role in the default roles_data file contains a OS::TripleO::Services::Rhsm resource, which is disabled by default. To enable the service, register the resource to the rhsm composable service file: The rhsm composable service accepts a RhsmVars parameter, which allows you to define multiple sub-parameters relevant to your registration. For example: You can also use the RhsmVars parameter in combination with role-specific parameters (e.g. ControllerParameters ) to provide flexibility when enabling specific repositories for different nodes types. The section is a list of sub-parameters available to use with the RhsmVars parameter for use with the rhsm composable service. 5.2. RhsmVars sub-parameters See the role documentation to learn about all Ansible parameters. rhsm Description rhsm_method Choose the registration method. Either portal , satellite , or disable . rhsm_org_id The organization to use for registration. To locate this ID, run sudo subscription-manager orgs from the undercloud node. Enter your Red Hat credentials when prompted, and use the resulting Key value. rhsm_pool_ids The subscription pool ID to use. Use this if not auto-attaching subscriptions. To locate this ID, run sudo subscription-manager list --available --all --matches="*OpenStack*" from the undercloud node, and use the resulting Pool ID value. rhsm_activation_key The activation key to use for registration. Does not work when rhsm_repos is configured. rhsm_autosubscribe Automatically attach compatible subscriptions to this system. Set to true to enable. rhsm_baseurl The base URL for obtaining content. The default is the Red Hat Content Delivery Network URL. If using a Satellite server, change this value to the base URL of your Satellite server content repositories. rhsm_server_hostname The hostname of the subscription management service for registration. The default is the Red Hat Subscription Management hostname. If using a Satellite server, change this value to your Satellite server hostname. rhsm_repos A list of repositories to enable. Does not work when rhsm_activation_key is configured. rhsm_username The username for registration. If possible, use activation keys for registration. rhsm_password The password for registration. If possible, use activation keys for registration. rhsm_rhsm_proxy_hostname The hostname for the HTTP proxy. For example: proxy.example.com . rhsm_rhsm_proxy_port The port for HTTP proxy communication. For example: 8080 . rhsm_rhsm_proxy_user The username to access the HTTP proxy. rhsm_rhsm_proxy_password The password to access the HTTP proxy. Now that you have an understanding of how the rhsm composable service works and how to configure it, you can use the following procedures to configure your own registration details. 5.3. Registering the overcloud with the rhsm composable service Use the following procedure to create an environment file that enables and configures the rhsm composable service. The director uses this environment file to register and subscribe your nodes. Procedure Create an environment file ( templates/rhsm.yml ) to store the configuration. Include your configuration in the environment file. 
For example: The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration. Save the environment file. You can also provide registration details to specific overcloud roles. The section provides an example of this. 5.4. Applying the rhsm composable service to different roles You can apply the rhsm composable service on a per-role basis. For example, you can apply different sets of configurations to Controller nodes, Compute nodes, and Ceph Storage nodes. Procedure Create an environment file ( templates/rhsm.yml ) to store the configuration. Include your configuration in the environment file. For example: The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The ControllerParameters , ComputeParameters , and CephStorageParameters use their own RhsmVars parameter to pass subscription details to their respective roles. Note Set the RhsmVars parameter within the CephStorageParameters parameter to use a Red Hat Ceph Storage subscription and repositories specific to Ceph Storage. Ensure the rhsm_repos parameter contains the standard Red Hat Enterprise Linux repositories instead of the Extended Update Support (EUS) repositories that Controller and Compute nodes require. Save the environment file. 5.5. Registering the overcloud to Red Hat Satellite Use the following procedure to create an environment file that enables and configures the rhsm composable service to register nodes to Red Hat Satellite instead of the Red Hat Customer Portal. Procedure Create an environment file ( templates/rhsm.yml ) to store the configuration. Include your configuration in the environment file. For example: The resource_registry associates the rhsm composable service with the OS::TripleO::Services::Rhsm resource, which is available on each role. The RhsmVars variable passes parameters to Ansible for configuring your Red Hat registration. Save the environment file. These procedures enable and configure rhsm on the overcloud. However, if you used the rhel-registration method from Red Hat OpenStack Platform version, you must disable it and switch to the Ansible-based method. Use the following procedure to switch from the old rhel-registration method to the Ansible-based method. 5.6. Switching to the rhsm composable service The rhel-registration method runs a bash script to handle the overcloud registration. The scripts and environment files for this method are located in the core Heat template collection at /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/ . Complete the following steps to switch from the rhel-registration method to the rhsm composable service. Procedure Exclude the rhel-registration environment files from future deployments operations. In most cases, exclude the following files: rhel-registration/environment-rhel-registration.yaml rhel-registration/rhel-registration-resource-registry.yaml If you use a custom roles_data file, ensure that each role in your roles_data file contains the OS::TripleO::Services::Rhsm composable service. For example: Add the environment file for rhsm composable service parameters to future deployment operations. 
This method replaces the rhel-registration parameters with the rhsm service parameters and changes the Heat resource that enables the service from: To: You can also include the /usr/share/openstack-tripleo-heat-templates/environments/rhsm.yaml environment file with your deployment to enable the service. To help transition your details from the rhel-registration method to the rhsm method, use the following table to map the your parameters and their values. 5.7. rhel-registration to rhsm mappings rhel-registration rhsm / RhsmVars rhel_reg_method rhsm_method rhel_reg_org rhsm_org_id rhel_reg_pool_id rhsm_pool_ids rhel_reg_activation_key rhsm_activation_key rhel_reg_auto_attach rhsm_autosubscribe rhel_reg_sat_url rhsm_satellite_url rhel_reg_repos rhsm_repos rhel_reg_user rhsm_username rhel_reg_password rhsm_password rhel_reg_http_proxy_host rhsm_rhsm_proxy_hostname rhel_reg_http_proxy_port rhsm_rhsm_proxy_port rhel_reg_http_proxy_username rhsm_rhsm_proxy_user rhel_reg_http_proxy_password rhsm_rhsm_proxy_password Now that you have configured the environment file for the rhsm service, you can include it with your overcloud deployment operation. 5.8. Deploying the overcloud with the rhsm composable service This section shows how to apply your rhsm configuration to the overcloud. Procedure Include rhsm.yml environment file with the openstack overcloud deploy : This enables the Ansible configuration of the overcloud and the Ansible-based registration. Wait until the overcloud deployment completes. Check the subscription details on your overcloud nodes. For example, log into a Controller node and run the following commands: In addition to the director-based registration method, you can also manually register after deployment. 5.9. Running Ansible-based registration manually You can perform manual Ansible-based registration on a deployed overcloud. You accomplish this using the director's dynamic inventory script to define node roles as host groups and then run a playbook against them using ansible-playbook . The following example shows how to manually register Controller nodes using a playbook. Procedure Create a playbook with that using the redhat_subscription modules to register your nodes. For example, the following playbook applies to Controller nodes: This play contains three tasks: Register the node using an activation key. Disable any auto-enabled repositories. Enable only the repositories relevant to the Controller node. The repositories are listed with the repos variable. After deploying the overcloud, you can run the following command so that Ansible executes the playbook ( ansible-osp-registration.yml ) against your overcloud: This command does the following: Runs the dynamic inventory script to get a list of host and their groups. Applies the playbook tasks to the nodes in the group defined in the playbook's hosts parameter, which in this case is the Controller group. 5.10. Locking the environment to a Red Hat Enterprise Linux release Red Hat OpenStack Platform 16.0 is supported on Red Hat Enterprise Linux 8.1. After deploying your overcloud, lock the overcloud repositories to the Red Hat Enterprise Linux 8.1 release. Prerequisites You have deployed an overcloud with all nodes registered with the Red Hat Subscription Manager (RHSM) composable service. Procedure Log into the undercloud as the stack user. 
Source the stackrc file: Create a static inventory file of your overcloud: If you use an overcloud name different to the default overcloud name of overcloud , set the name of your overcloud with the --plan option. Create a playbook that contains a task to lock the operating system version to Red Hat Enterprise Linux 8.1 on all nodes: Run the set_release.yaml playbook: Note To manually lock a node to a version, log in to the node and run the subscription-manager release command: | [
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml",
"parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.8-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-16-for-rhel-8-x86_64-rpms - rhceph-4-osd-for-rhel-8-x86_64-rpms - rhceph-4-mon-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\"",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.8-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-16-for-rhel-8-x86_64-rpms - rhceph-4-osd-for-rhel-8-x86_64-rpms - rhceph-4-mon-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"1a85f9223e3d5e43013e3d6e8ff506fd\" rhsm_method: \"portal\"",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: ControllerParameters: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.8-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-16-for-rhel-8-x86_64-rpms - rhceph-4-mon-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"55d251f1490556f3e75aa37e89e10ce5\" rhsm_method: \"portal\" ComputeParameters: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.8-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-16-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"55d251f1490556f3e75aa37e89e10ce5\" rhsm_method: \"portal\" CephStorageParameters: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-rpms - rhel-8-for-x86_64-appstream-rpms - rhel-8-for-x86_64-highavailability-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - openstack-16-deployment-tools-for-rhel-8-x86_64-rpms - rhceph-4-osd-for-rhel-8-x86_64-rpms rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"68790a7aa2dc9dc50a9bc39fabc55e0d\" rhsm_method: \"portal\"",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml parameter_defaults: RhsmVars: rhsm_activation_key: \"myactivationkey\" rhsm_method: \"satellite\" rhsm_org_id: \"ACME\" rhsm_server_hostname: satellite.example.com\" rhsm_baseurl: \"https://satellite.example.com/pulp/repos\"",
"- name: Controller description: | Controller role that has all the controller services loaded and handles Database, Messaging and Network functions. CountDefault: 1 ServicesDefault: - OS::TripleO::Services::Rhsm",
"resource_registry: OS::TripleO::NodeExtraConfig: rhel-registration.yaml",
"resource_registry: OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml",
"openstack overcloud deploy <other cli args> -e ~/templates/rhsm.yaml",
"sudo subscription-manager status sudo subscription-manager list --consumed",
"--- - name: Register Controller nodes hosts: Controller become: yes vars: repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.8-for-rhel-8-x86_64-rpms - advanced-virt-for-rhel-8-x86_64-rpms - openstack-16-for-rhel-8-x86_64-rpms - rhceph-4-mon-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms tasks: - name: Register system redhat_subscription: username: myusername password: p@55w0rd! org_id: 1234567 pool_ids: 1a85f9223e3d5e43013e3d6e8ff506fd - name: Disable all repos command: \"subscription-manager repos --disable *\" - name: Enable Controller node repos command: \"subscription-manager repos --enable {{ item }}\" with_items: \"{{ repos }}\"",
"ansible-playbook -i /usr/bin/tripleo-ansible-inventory ansible-osp-registration.yml",
"source ~/stackrc",
"tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory ~/inventory.yaml",
"cat > ~/set_release.yaml <<'EOF' - hosts: overcloud gather_facts: false tasks: - name: set release to 8.1 command: subscription-manager release --set=8.1 become: true EOF",
"ansible-playbook -i ~/inventory.yaml -f 25 ~/set_release.yaml",
"sudo subscription-manager release --set=8.1"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/ansible-based-registration |
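After deployment, the director's dynamic inventory can also drive ad-hoc checks of the registration state across a whole role. This is an optional verification sketch rather than part of the documented procedure; the Controller group name matches the playbook example above, and both commands are read-only.

# Check the subscription status on every Controller node.
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -m command -a "subscription-manager status"
# List the repositories that ended up enabled on each node.
ansible -i /usr/bin/tripleo-ansible-inventory Controller -b -m command -a "subscription-manager repos --list-enabled"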
Chapter 1. Red Hat Decision Manager Spring Boot business applications | Chapter 1. Red Hat Decision Manager Spring Boot business applications Spring Framework is a Java platform that provides comprehensive infrastructure support for developing Java applications. Spring Boot is a lightweight framework based on Spring Boot starters. Spring Boot starters are pom.xml files that contain a set of dependency descriptors that you can include in your Spring Boot project. Red Hat Decision Manager Spring Boot business applications are flexible, UI-agnostic logical groupings of individual services that provide certain business capabilities. Business applications are based on Spring Boot starters. They are usually deployed separately and can be versioned individually. A complete business application enables a domain to achieve specific business goals, for example, order management or accommodation management. After you create and configure your business application, you can deploy it to an existing service or to the cloud, through OpenShift. Business applications can contain one or more of the following projects and more than one project of the same type: Business assets (KJAR): Contains business processes, rules, and forms and are easily imported into Business Central. Data model: Data model projects provide common data structures that are shared between the service projects and business assets projects. This enables proper encapsulation, promotes reuse, and reduces shortcuts. Each service project can expose its own public data model. Dynamic assets: Contains assets that you can use with case management. Service: A deployable project that provides the actual service with various capabilities. It includes the business logic that operates your business. In most cases, a service project includes business assets and data model projects. A business application can split services into smaller component service projects for better manageability. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/bus_app_business-applications |
17.4. Accessing Delegated Services | 17.4. Accessing Delegated Services For both services and hosts, if a client has delegated authority, it can obtain a keytab for that principal on the local machine. For services, this has the format service/hostname@REALM . For hosts, the service is host . With kinit , use the -k option to load a keytab and the -t option to specify the keytab. For example: To access a host: To access a service: | [
"kinit -kt /etc/krb5.keytab host/[email protected]",
"kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/accessing-service |
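Assuming the keytab-based kinit calls above succeed, the resulting credentials can be inspected with the standard Kerberos client tools. This is an illustrative check, not an additional required step; the principal passed to kvno reuses the example service principal and should be replaced with your own.

# Show which principals are stored in the host keytab without authenticating.
klist -kt /etc/krb5.keytab
# After kinit, confirm which ticket-granting ticket was obtained.
klist
# Request a service ticket for the HTTP service to prove the credentials work.
kvno HTTP/server.example.com@EXAMPLE.COM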
Chapter 18. Integrating with cloud management platforms | Chapter 18. Integrating with cloud management platforms You can integrate Red Hat Advanced Cluster Security for Kubernetes (RHACS) with different cloud management platforms to discover potential clusters to secure. The cluster discovery aims to gain a detailed overview of the cluster assets already or not yet secured by RHACS. The clusters discovered from a cloud management platform are accessible from the Platform Configuration Clusters Discovered clusters page. RHACS matches the discovered clusters against already secured clusters. Based on the result of the matching, a discovered cluster has one of the following statuses: Secured : The cluster is secured by RHACS. Unsecured : The cluster is not secured by RHACS. Undetermined : The metadata collected from secured clusters is not enough for a unique match. The cluster is either secured or unsecured. For successful cluster matching, ensure that the following conditions are met: Sensors running on secured clusters have been updated to the latest version. Access to instance tags via the metadata service has been granted for secured clusters running on AWS. Sensors require access to the AWS EC2 instance tags to determine the cluster status. You can integrate RHACS with the following cloud management platforms: Paladin Cloud OpenShift Cluster Manager 18.1. Configuring Paladin Cloud integration To discover cluster assets from Paladin Cloud, create a new integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites A Paladin Cloud account. A Paladin Cloud API token. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Cloud source integrations section and select Paladin Cloud . Click New integration . Enter a name for Integration name . Enter the Paladin Cloud API endpoint for Paladin Cloud endpoint . The default is https://api.paladincloud.io . Enter the Paladin Cloud API token for Paladin Cloud token . Select Test to confirm that authentication is working. Select Create to generate the configuration. Once configured, Red Hat Advanced Cluster Security for Kubernetes discovers cluster assets from your connected Paladin Cloud account. 18.2. Configuring Red Hat OpenShift Cluster Manager integration To discover cluster assets from Red Hat OpenShift Cluster Manager, create a new integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites A Red Hat account. A Red Hat service account . Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Cloud source integrations section and select Red Hat OpenShift Cluster Manager . Click New integration . Enter a name for Integration name . Enter the Red Hat OpenShift Cluster Manager API endpoint for Endpoint . The default is https://api.openshift.com . Enter the Red Hat service account credentials for Client ID and Client secret . Select Test to confirm that authentication is working. Select Create to generate the configuration. Once configured, Red Hat Advanced Cluster Security for Kubernetes discovers cluster assets from your connected Red Hat account. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-cloud-management-platforms |
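Before saving the OpenShift Cluster Manager integration, the service account credentials can be verified outside RHACS. The sketch below is only an assumption-laden illustration: it uses the public Red Hat SSO token endpoint and the OpenShift Cluster Manager clusters API, with CLIENT_ID and CLIENT_SECRET standing in for your service account values and jq assumed to be installed.

# Exchange the service account credentials for a short-lived access token.
TOKEN=$(curl -s -X POST \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  -d "client_secret=${CLIENT_SECRET}" \
  https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | jq -r .access_token)

# List the clusters the account can see; these are the assets RHACS would discover.
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://api.openshift.com/api/clusters_mgmt/v1/clusters" | jq -r '.items[].name'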
Chapter 2. Power management auditing and analysis | Chapter 2. Power management auditing and analysis 2.1. Audit and analysis overview The detailed manual audit, analysis, and tuning of a single system is usually the exception because the time and cost spent to do so typically outweigh the benefits gained from these last pieces of system tuning. However, performing these tasks once for a large number of nearly identical systems where you can reuse the same settings for all systems can be very useful. For example, consider the deployment of thousands of desktop systems, or an HPC cluster where the machines are nearly identical. Another reason to do auditing and analysis is to provide a basis for comparison against which you can identify regressions or changes in system behavior in the future. The results of this analysis can be very helpful in cases where hardware, BIOS, or software updates happen regularly and you want to avoid any surprises with regard to power consumption. Generally, a thorough audit and analysis gives you a much better idea of what is really happening on a particular system. Auditing and analyzing a system with regard to power consumption is relatively hard, even with the most modern systems available. Most systems do not provide the necessary means to measure power use via software. Exceptions exist, though: the ILO management console of Hewlett Packard server systems has a power management module that you can access through the web. IBM provides a similar solution in their BladeCenter power management module. On some Dell systems, the IT Assistant offers power monitoring capabilities as well. Other vendors are likely to offer similar capabilities for their server platforms, but as can be seen, there is no single solution available that is supported by all vendors. If your system has no inbuilt mechanism to measure power consumption, a few other choices exist. You could install a special power supply for your system that offers power consumption information through USB. The Gigabyte Odin GT 550 W PC power supply is one such example. As a last resort, some external watt meters like the Watts up? PRO have a USB connector. Direct measurement of power consumption is often only necessary to maximize savings as far as possible. Fortunately, other means are available to determine whether changes have taken effect and how the system is behaving. This chapter describes the necessary tools. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/audit_and_analysis
Chapter 3. Enabling Linux control group version 1 (cgroup v1) | Chapter 3. Enabling Linux control group version 1 (cgroup v1) As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.16 will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" section. 3.1. Enabling Linux cgroup v1 during installation You can enable Linux control group version 1 (cgroup v1) when you install a cluster by creating installation manifests. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Procedure Create or edit the node.config object to specify the v1 cgroup: apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: "v1" Proceed with the installation as usual. Additional resources OpenShift Container Platform installation overview Configuring the Linux cgroup on your nodes | [
"apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v2\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_configuration/enabling-cgroup-v1 |
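Although it is not part of the documented procedure, a quick way to confirm which cgroup version a node is actually running is to check the file system type mounted at /sys/fs/cgroup. The sketch below assumes cluster-admin access with the oc client and a placeholder <node_name>; cgroup2fs indicates cgroup v2, while tmpfs indicates the legacy cgroup v1 hierarchy:

# List nodes and pick one to inspect
oc get nodes

# Print the file system type backing /sys/fs/cgroup on that node
oc debug node/<node_name> -- chroot /host stat -fc %T /sys/fs/cgroup/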
1.4. Installing Red Hat High Availability Add-On software | 1.4. Installing Red Hat High Availability Add-On software To install Red Hat High Availability Add-On software, you must have entitlements for the software. If you are using the luci configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software. You can use the following yum install command to install the Red Hat High Availability Add-On software packages: Note that installing only the rgmanager package will pull in all necessary dependencies to create an HA cluster from the HighAvailability channel. The lvm2-cluster and gfs2-utils packages are part of the ResilientStorage channel and may not be needed by your site. Warning After you install the Red Hat High Availability Add-On packages, you should ensure that your software update preferences are set so that nothing is installed automatically. Installation on a running cluster can cause unexpected behaviors. Upgrading Red Hat High Availability Add-On Software It is possible to upgrade the cluster software on a given major release of Red Hat Enterprise Linux without taking the cluster out of production. Doing so requires disabling the cluster software on one host at a time, upgrading the software, and restarting the cluster software on that host. Shut down all cluster services on a single cluster node. For instructions on stopping cluster software on a node, see Section 9.1.2, "Stopping Cluster Software". It may be desirable to manually relocate cluster-managed services and virtual machines off of the host prior to stopping rgmanager. Execute the yum update command to update installed packages. Reboot the cluster node or restart the cluster services manually. For instructions on starting cluster software on a node, see Section 9.1.1, "Starting Cluster Software". | [
"yum install rgmanager lvm2-cluster gfs2-utils"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-install-clust-sw-ca |
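For orientation only, the per-node rolling upgrade described above can be sketched as a short shell sequence. This is not a substitute for Section 9.1; it assumes a typical RHEL 6 High Availability stack (rgmanager, GFS2, clvmd, cman), and any service that is not in use on your site should simply be omitted:

# Stop the cluster software on one node at a time (order matters)
service rgmanager stop
service gfs2 stop     # only if GFS2 file systems are in use
service clvmd stop    # only if clustered LVM is in use
service cman stop

# Update the installed packages on that node
yum update

# Reboot the node, or restart the cluster software manually in reverse order
service cman start
service clvmd start   # only if clustered LVM is in use
service gfs2 start    # only if GFS2 file systems are in use
service rgmanager start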
Chapter 8. Supported integration products | Chapter 8. Supported integration products AMQ Streams 1.8 supports integration with the following Red Hat products. Red Hat Single Sign-On 7.4 and later Provides OAuth 2.0 authentication and OAuth 2.0 authorization. Red Hat 3scale API Management 2.6 and later Secures the Kafka Bridge and provides additional API management features. Red Hat Debezium 1.5 Monitors databases and creates event streams. Red Hat Service Registry 2.0 Provides a centralized store of service schemas for data streaming. For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the AMQ Streams 1.8 documentation. Additional resources Red Hat Single Sign-On Supported Configurations Red Hat 3scale API Management Supported Configurations Red Hat Debezium Supported Configurations Red Hat Service Registry Supported Configurations | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_openshift/supported-config-str |
Building applications | Building applications OpenShift Container Platform 4.14 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi EOD",
"postgrescluster.postgres-operator.crunchydata.com/hippo created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc expose service spring-petclinic -n my-petclinic",
"route.route.openshift.io/spring-petclinic exposed",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s",
"for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done",
"/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD",
"oc get subs -n openshift-operators",
"NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable",
"oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD",
"database.postgresql.dev4devs.com/sampledatabase created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"",
"apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com",
"status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power",
"/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power",
"spec: tags: - knowledge - is - power",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com",
"USDSERVICE_BINDING_ROOT 1 ├── account-database 2 │ ├── type 3 │ ├── provider 4 │ ├── uri │ ├── username │ └── password └── transaction-event-stream 5 ├── type ├── connection-count ├── uri ├── certificates └── private-key",
"import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")",
"from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments",
"host: hippo-pgbouncer port: 5432",
"DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432",
"application: name: spring-petclinic group: apps version: v1 resource: deployments",
"services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo",
"DATABASE_HOST: hippo-pgbouncer",
"POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: \"\" version: v1 kind: Secret name: super-secret-data",
"apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1",
"apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: \"v1\" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8",
"oc delete ServiceBinding <.metadata.name>",
"oc delete ServiceBinding spring-petclinic-pgcluster",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod my-application",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/building_applications/index |
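The oc set route-backends command shown in the A/B testing examples above can also be scripted to shift traffic gradually rather than in a single step. The following sketch reuses the ab-example route and the ab-example-a and ab-example-b services from those examples; the 10% increments and the five-minute pause are arbitrary choices for illustration, not values prescribed by this guide.

# Move traffic toward ab-example-b in 10% steps, printing the weights after each change
for weight in 10 20 30 40 50; do
    oc set route-backends ab-example ab-example-a=$((100 - weight)) ab-example-b="$weight"
    oc set route-backends ab-example    # show the current weights, as in the examples above
    sleep 300                           # arbitrary soak time before the next adjustment
done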
Chapter 10. Hosts | Chapter 10. Hosts 10.1. Introduction to Hosts Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM). KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Virtualization Manager. A Red Hat Virtualization environment has one or more hosts attached to it. Red Hat Virtualization supports two methods of installing hosts. You can use the Red Hat Virtualization Host (RHVH) installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation. Note You can identify the host type of an individual host in the Red Hat Virtualization Manager by selecting the host's name to open the details view, and checking the OS Description under Software . Hosts use tuned profiles, which provide virtualization optimizations. For more information on tuned , see the Red Hat Enterprise Linux 7 Performance Tuning Guide . The Red Hat Virtualization Host has security features enabled. Security Enhanced Linux (SELinux) and the firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details view. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment. A host is a physical 64-bit server with the Intel VT or AMD-V extensions running Red Hat Enterprise Linux 7 AMD64/Intel 64 version. A physical host on the Red Hat Virtualization platform: Must belong to only one cluster in the system. Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions. Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation. Has a minimum of 2 GB RAM. Can have an assigned system administrator with system permissions. Administrators can receive the latest security advisories from the Red Hat Virtualization watch list. Subscribe to the Red Hat Virtualization watch list to receive new security advisories for Red Hat Virtualization products by email. Subscribe by completing this form: https://www.redhat.com/mailman/listinfo/rhsa-announce | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-hosts |
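Before attaching a new host, the requirements listed above can be checked from a shell on the machine itself. The commands below are a minimal sketch using standard RHEL utilities; they are illustrative rather than part of the official installation procedure, and the 2 GB figure simply mirrors the minimum stated in this chapter.

# A non-zero count confirms the CPU exposes the Intel VT (vmx) or AMD-V (svm) extensions
grep -c -E 'vmx|svm' /proc/cpuinfo

# Check total memory against the 2 GB minimum RAM requirement
grep MemTotal /proc/meminfo

# SELinux is expected to be enforcing on RHVH and on properly configured RHEL hosts
getenforce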
Chapter 5. System tags and groups | Chapter 5. System tags and groups Red Hat Insights for Red Hat Enterprise Linux enables administrators to filter groups of systems in inventory and in individual services using group tags. Groups are identified by the method of system data ingestion to Insights for Red Hat Enterprise Linux. Insights for Red Hat Enterprise Linux enables filtering groups of systems by those running SAP workloads, by Satellite host group, by Microsoft SQL Server workload, and by custom tags that are defined by system administrators with root access to configure the Insights client on the system. Note As of Spring 2022, inventory, advisor, compliance, vulnerability, patch, and policies enable filtering by groups and tags. Other services will follow. Important Unlike the other services that enable tagging, the compliance service sets tags within lists of systems in the compliance service UI. For more information, see the following section Group and tag filters in the compliance service . Use the global, Filter results box to filter by SAP workloads, Satellite host groups, MS SQL Server workloads, or by custom tags added to the Insights client configuration file. Prerequisites The following prerequisites and conditions must be met to use the tagging features in Red Hat Insights for Red Hat Enterprise Linux: The Red Hat Insights client is installed and registered on each system. You must have root permissions, or their equivalent, to create custom tags or change the /etc/insights-client/tags.yaml file. 5.1. Group and tag filters in the compliance service The compliance service enables users to apply tag and group filters to systems reporting compliance data; however, they are not set using the Filter by status dropdown. Unlike most of the other services in the Insights for Red Hat Enterprise Linux application, the compliance service only shows data for systems under the following conditions: The system is associated with a compliance service security policy. The system is reporting compliance data to insights using the insights-client --compliance command. Because of those conditions, compliance-service users have to set tag and group filters using the primary and secondary filters located above lists of systems in the compliance service UI. Tag and group filters above systems list in the compliance service 5.2. SAP workloads As Linux becomes the mandatory operating system for SAP ERP workloads in 2025, Red Hat Enterprise Linux and Red Hat Insights for Red Hat Enterprise Linux are working to make Insights for Red Hat Enterprise Linux the management tool of choice for SAP administrators. As part of this ongoing effort, Insights for Red Hat Enterprise Linux automatically tags systems running SAP workloads and by SAP ID (SID), without any customization needed by administrators. Users can easily filter those workloads throughout the Insights for Red Hat Enterprise Linux application by using the global Filter by tags drop-down menu. 5.3. Satellite host groups Satellite host groups are configured in Satellite and recognized automatically by Insights for Red Hat Enterprise Linux. 5.4. Microsoft SQL Server workloads Using the global Filter by tags feature, Red Hat Insights for Red Hat Enterprise Linux users can select groups of systems running Microsoft SQL Server workloads. In May of 2019, the Red Hat Insights team introduced a new set of Insights for Red Hat Enterprise Linux recommendations for Microsoft SQL Server running on Red Hat Enterprise Linux (RHEL). 
These rules alert administrators to operating system level configurations that do not conform to the documented recommendations from Microsoft and Red Hat. A limitation of these rules was that they primarily analyzed the operating system and not the database itself. The latest release of Insights for Red Hat Enterprise Linux and RHEL 8.5, introduces Microsoft SQL Assessment API. The SQL Assessment API provides a mechanism to evaluate the database configuration of MS SQL Server for best practices. The API is delivered with a rule set containing best practice rules suggested by the Microsoft SQL Server Team. While this rule set is enhanced with the release of new versions, the API is built with the intent to give a highly customizable and extensible solution, which enables users to tune the default rules and create their own. The SQL Assessment API is supported by PowerShell for Linux (available from Microsoft), and Microsoft has developed a PowerShell script that can be used to call the API and store its results as a JSON formatted file. With RHEL 8.5, the Insights client now uploads this JSON file and presents the results in an easy-to-understand format in the Insights for Red Hat Enterprise Linux UI. For more information about SQL Server assessment in Insights for Red Hat Enterprise Linux, see SQL Server database best practices now available through Red Hat Insights . 5.4.1. Setting up SQL Server assessments To configure the Microsoft SQL Assessment API to provide information to Red Hat Insights, the database administrator needs to take the following steps. Procedure In the database you wish to assess, create a login for SQL Server assessments using SQL Authentication. The following Transact-SQL creates a login. Replace <*PASSWORD*> with a strong password: Store the credentials for login on the system as follows, again replacing <*PASSWORD*> with the password you used in step 1. Secure the credentials used by the assessment tool by ensuring that only the mssql user can access the credentials. Download PowerShell from the microsoft-tools repository. This is the same repository you configured when you installed the mssql-tools and mssqlodbc17 packages as part of SQL Server installation. Install the SQLServer module for PowerShell. This module includes the assessment API. Download the runassessment script from the Microsoft examples GitHub repository. Ensure it is owned and executable by mssql. Create the directory that will store the log file used by Red Hat Insights. Again, make sure it is owned and executable by mssql. You can now create your first assessment, but be sure to do so as the user mssql so that subsequent assessments can be run automatically via cron or systemd more securely as the mssql user. Insights for Red Hat Enterprise Linux will automatically include the assessment time it runs, or you can initiate Insights client by running this command: 5.4.1.1. Setting up the SQL Assessment on a timer Because SQL Server Assessments can take 10 minutes or more to complete, it may or may not make sense for you to run the assessment process automatically every day. If you would like to run them automatically, the Red Hat SQL Server community has created systemd service and timer files to use with the assessment tool. Procedure Download the following files from Red Hat public SQL Server Community of Practice GitHub site . mssql-runassessment.service mssql-runassessment.timer Install both files in the directory /etc/systemd/system/ : Enable the timer with: 5.5. 
Custom system tagging By applying custom grouping and tagging to your systems, you can add contextual markers to individual systems, filter by those tags in the Insights for Red Hat Enterprise Linux application, and more easily focus on related systems. This functionality can be especially valuable when deploying Insights for Red Hat Enterprise Linux at scale, with many hundreds or thousands of systems under management. In addition to the ability to add custom tags to several Insights for Red Hat Enterprise Linux services, you can add predefined tags. The advisor service can use those tags to create targeted recommendations for your systems that might require more attention, such as those systems that require a higher level of security. Note To create custom and predefined tags, you must have root permissions, or their equivalent, to add to, or change the /etc/insights-client/tags.yaml file. 5.5.1. Tag structure Tags use a namespace/key=value paired structure. Namespace. The namespace is the name of the ingestion point, insights-client , and cannot be changed. The tags.yaml file is abstracted from the namespace, which is injected by the Insights client before upload. Key. The key can be a user-chosen key or a predefined key from the system. You can use a mix of capitalization, letters, numbers, symbols and whitespace. Value. Define your own descriptive string value. You can use a mix of capitalization, letters, numbers, symbols and whitespace. Note The advisor service includes Red Hat-supported predefined tags. 5.5.2. Creating a tags.yaml file and adding a custom group Create and add tags to /etc/insights-client/tags.yaml simply by using insights-client --group=<name-you-choose> , which performs the following actions: Creates the etc/insights-client/tags.yaml file Adds the group= key and <name-you-choose> value to tags.yaml Uploads a fresh archive from the system to the Insights for Red Hat Enterprise Linux application so the new tag is immediately visible along with your latest results After creating the initial group tag, add additional tags as needed by editing the /etc/insights-client/tags.yaml file. The following procedure shows how to create the /etc/insights-client/tags.yaml file and the initial group, then verify the tag exists in the Insights for Red Hat Enterprise Linux inventory. Procedure to create new group Run the following command as root, adding your custom group name after --group= : Example of tags.yaml format The following example of a tags.yaml file shows an example of file format and additional tags added for the new group: Procedure to verify your custom group was created Navigate to Red Hat Insights > RHEL > Inventory and log in if necessary. Click the Filter results dropdown menu. Scroll through the list or use the search function to locate the tag. Click the tag to filter by it. Verify that your system is among the results on the advisor systems list. Procedure to verify that the system is tagged Navigate to Red Hat Insights > RHEL > Inventory and log in if necessary. Activate the Name filter and begin typing the system name until you see your system, then select it. Verify that, to the system name, the tag symbol is darkened and shows a number representing the correct number of tags applied. 5.5.3. Editing tags.yaml to add or change tags After creating the group filter, edit the contents of /etc/insights-client/tags.yaml as needed to add or modify tags. Procedure Using the command line, open the tag configuration file for editing. 
[root@server ~]# vi /etc/insights-client/tags.yaml Edit content or add additional values as needed. The following example shows how you can organize tags.yaml when adding multiple tags to a system. Note Add as many key=value pairs as you need. Use a mix of capitalization, letters, numbers, symbols, and whitespace. Save your changes and close the editor. Optionally, generate an upload to Insights for Red Hat Enterprise Linux. 5.5.4. Using predefined system tags to get more accurate Red Hat Insights advisor service recommendations and enhanced security Red Hat Insights advisor service recommendations treat every system equally. However, some systems might require more security than others, or require different networking performance levels. In addition to the ability to add custom tags, Red Hat Insights for Red Hat Enterprise Linux provides predefined tags that the advisor service can use to create targeted recommendations for your systems that might require more attention. To opt in and get the extended security hardening and enhanced detection and remediation capabilities offered by predefined tags, you need to configure the tags. After configuration, the advisor service provides recommendations based on tailored severity levels, and preferred network performance that apply to your systems. To configure the tags, use the /etc/insights-client/tags.yaml file to tag systems with predefined tags in a similar way that you might use it to tag systems in the inventory service. The predefined tags are configured using the same key=value structure used to create custom tags. Details about the Red Hat-predefined tags are in the following table. Table 5.1. List of Supported Predefined Tags Key Value Note security normal (default) / strict With the normal (default) value, the advisor service compares the system's risk profile to a baseline derived from the default configuration of the most recent version of RHEL and from often-used usage patterns. This keeps recommendations focused, actionable, and low in numbers. With the strict value, the advisor service considers the system to be security-sensitive, causing specific recommendations to use a stricter baseline, potentially showing recommendations even on fresh up-to-date RHEL installations. network_performance null (default) / latency / throughput The preferred network performance (either latency or throughput according to your business requirement) would affect the severity of an advisor service recommendation to a system. Note The predefined tag keys names are reserved. If you already use the key security , with a value that differs from one of the predefined values, you will not see a change in your recommendations. You will only see a change in recommendations if your existing key=value is the same as one of the predefined keys. For example, if you have a key=value of security: high , your recommendations will not change because of the Red Hat-predefined tags. If you currently have a key=value pair of security: strict , you will see a change in the recommendations for your systems. Additional resources Using system tags to enable extended security hardening recommendations Leverage tags to make Red Hat Insights Advisor recommendations understand your environment better System tags and groups 5.5.5. 
Configuring predefined tags You can use the Red Hat Insights for Red Hat Enterprise Linux advisor service's predefined tags to adjust the behavior of recommendations for your systems to gain extended security hardening and enhanced detection and remediation capabilities. You can configure the predefined tags by following this procedure. Prerequisites You have root-level access to your system You have Insights client installed You have systems registered within the Insights client You have created the tags.yaml file. For information about creating the tags.yaml file, see Creating a tags.yaml file and adding a custom group . Procedure Using the command line, and your preferred editor, open /etc/insights-client/tags.yaml . (The following example uses Vim.) Edit the /etc/insights-client/tags.yaml file to add the predefined key=value pair for the tags. This example shows how to add security: strict and network_performance: latency tags. Save your changes. Close the editor. Optional: Run the insights-client command to generate an upload to Red Hat Insights for Red Hat Enterprise Linux, or wait until the scheduled Red Hat Insights upload. Confirming that predefined tags are in your production area After generating an upload to Red Hat Insights (or waiting for the scheduled Insights upload), you can find out whether the tags are in the production environment by accessing Red Hat Insights > RHEL > Inventory . Find your system and look for the newly created tags. You see a table that shows: Name Value Tag Source (for example, insights-client). The following image shows an example of what you see in inventory after creating the tag. Example of recommendations after applying a predefined tag The following image of the advisor service shows a system with the network_performance: latency tag configured. The system shows a recommendation with a higher Total Risk level of Important. The system without the network_performance: latency tag has a Total Risk of Moderate. You can make decisions about prioritizing the system with higher Total Risk. | [
"USE [master] GO CREATE LOGIN [assessmentLogin] with PASSWORD= N'<*PASSWORD*>' ALTER SERVER ROLE [sysadmin] ADD MEMBER [assessmentLogin] GO",
"echo \"assessmentLogin\" > /var/opt/mssql/secrets/assessment echo \"<*PASSWORD*>\" >> /var/opt/mssql/secrets/assessment",
"chmod 0600 /var/opt/mssql/secrets/assessment chown mssql:mssql /var/opt/mssql/secrets/assessment",
"yum -y install powershell",
"su mssql -c \"/usr/bin/pwsh -Command Install-Module SqlServer\"",
"/bin/curl -LJ0 -o /opt/mssql/bin/runassessment.ps1 https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/manage/sql-assessment-api/RHEL/runassessment.ps1 chown mssql:mssql /opt/mssql/bin/runassessment.ps1 chmod 0700 /opt/mssql/bin/runassessment.ps1",
"mkdir /var/opt/mssql/log/assessments/ chown mssql:mssql /var/opt/mssql/log/assessments/ chmod 0700 /var/opt/mssql/log/assessments/",
"su mssql -c \"pwsh -File /opt/mssql/bin/runassessment.ps1\"",
"insights-client",
"cp mssql-runassessment.service /etc/systemd/system/ cp mssql-runassessment.timer /etc/systemd/system/ chmod 644 /etc/systemd/system/",
"systemctl enable --now mssql-runassessment.timer",
"insights-client --group=<name-you-choose>",
"tags --- group: eastern-sap name: Jane Example contact: [email protected] Zone: eastern time zone Location: - gray_rack - basement Application: SAP",
"tags --- group: eastern-sap location: Boston description: - RHEL8 - SAP key 4: value",
"insights-client",
"vi /etc/insights-client/tags.yaml",
"cat /etc/insights-client/tags.yaml group: redhat location: Brisbane/Australia description: - RHEL8 - SAP security: strict network_performance: latency",
"insights-client"
]
| https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems_with_fedramp/insights-system-tagging_compliance-managing-policies |
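The tagging workflow in this chapter can be condensed into a few commands. The sketch below assumes root access on a registered system, reuses the eastern-sap group name and the predefined security and network_performance keys from the examples above, and shows only one way to write the same key=value pairs; adjust the values to your own environment.

# Create /etc/insights-client/tags.yaml with the initial group tag and upload fresh data
insights-client --group=eastern-sap

# Append the Red Hat-predefined keys described in this chapter
cat >> /etc/insights-client/tags.yaml <<'EOF'
security: strict
network_performance: latency
EOF

# Upload again so the new tags appear in Inventory and are used by the advisor service
insights-client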
Chapter 4. Additional introspection operations | Chapter 4. Additional introspection operations In some situations, you might want to perform introspection outside of the standard overcloud deployment workflow. For example, you might want to introspect new nodes or refresh introspection data after replacing hardware on existing unused nodes. 4.1. Performing individual node introspection To perform a single introspection on an available node, set the node to management mode and perform the introspection. Procedure Set all nodes to a manageable state: Perform the introspection: After the introspection completes, the node changes to an available state. 4.2. Performing node introspection after initial introspection After an initial introspection, all nodes enter an available state due to the --provide option. To perform introspection on all nodes after the initial introspection, set the node to management mode and perform the introspection. Procedure Set all nodes to a manageable state Run the bulk introspection command: After the introspection completes, all nodes change to an available state. 4.3. Performing network introspection for interface information Network introspection retrieves link layer discovery protocol (LLDP) data from network switches. The following commands show a subset of LLDP information for all interfaces on a node, or full information for a particular node and interface. This can be useful for troubleshooting. Director enables LLDP data collection by default. Procedure To get a list of interfaces on a node, run the following command: For example: To view interface data and switch port information, run the following command: For example: 4.4. Retrieving hardware introspection details The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Director configuration parameters . For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node: RAM (in kilobytes) Physical CPU cores and their sibling threads NICs associated with the NUMA node Procedure To retrieve the information listed above, substitute <UUID> with the UUID of the bare-metal node to complete the following command: The following example shows the retrieved NUMA information for a bare-metal node: | [
"(undercloud) USD openstack baremetal node manage [NODE UUID]",
"(undercloud) USD openstack overcloud node introspect [NODE UUID] --provide",
"(undercloud) USD for node in USD(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage USDnode ; done",
"(undercloud) USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud) USD openstack baremetal introspection interface list [NODE UUID]",
"(undercloud) USD openstack baremetal introspection interface list c89397b7-a326-41a0-907d-79f8b86c7cd9 +-----------+-------------------+------------------------+-------------------+----------------+ | Interface | MAC Address | Switch Port VLAN IDs | Switch Chassis ID | Switch Port ID | +-----------+-------------------+------------------------+-------------------+----------------+ | p2p2 | 00:0a:f7:79:93:19 | [103, 102, 18, 20, 42] | 64:64:9b:31:12:00 | 510 | | p2p1 | 00:0a:f7:79:93:18 | [101] | 64:64:9b:31:12:00 | 507 | | em1 | c8:1f:66:c7:e8:2f | [162] | 08:81:f4:a6:b3:80 | 515 | | em2 | c8:1f:66:c7:e8:30 | [182, 183] | 08:81:f4:a6:b3:80 | 559 | +-----------+-------------------+------------------------+-------------------+----------------+",
"(undercloud) USD openstack baremetal introspection interface show [NODE UUID] [INTERFACE]",
"(undercloud) USD openstack baremetal introspection interface show c89397b7-a326-41a0-907d-79f8b86c7cd9 p2p1 +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+ | interface | p2p1 | | mac | 00:0a:f7:79:93:18 | | node_ident | c89397b7-a326-41a0-907d-79f8b86c7cd9 | | switch_capabilities_enabled | [u'Bridge', u'Router'] | | switch_capabilities_support | [u'Bridge', u'Router'] | | switch_chassis_id | 64:64:9b:31:12:00 | | switch_port_autonegotiation_enabled | True | | switch_port_autonegotiation_support | True | | switch_port_description | ge-0/0/2.0 | | switch_port_id | 507 | | switch_port_link_aggregation_enabled | False | | switch_port_link_aggregation_id | 0 | | switch_port_link_aggregation_support | True | | switch_port_management_vlan_id | None | | switch_port_mau_type | Unknown | | switch_port_mtu | 1514 | | switch_port_physical_capabilities | [u'1000BASE-T fdx', u'100BASE-TX fdx', u'100BASE-TX hdx', u'10BASE-T fdx', u'10BASE-T hdx', u'Asym and Sym PAUSE fdx'] | | switch_port_protocol_vlan_enabled | None | | switch_port_protocol_vlan_ids | None | | switch_port_protocol_vlan_support | None | | switch_port_untagged_vlan_id | 101 | | switch_port_vlan_ids | [101] | | switch_port_vlans | [{u'name': u'RHOS13-PXE', u'id': 101}] | | switch_protocol_identities | None | | switch_system_name | rhos-compute-node-sw1 | +--------------------------------------+------------------------------------------------------------------------------------------------------------------------+",
"openstack baremetal introspection data save <UUID> | jq .numa_topology",
"{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/customizing_your_red_hat_openstack_platform_deployment/assembly_additional-introspection-operations |
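Because the saved introspection data is plain JSON, it can be filtered further with jq on the undercloud. The commands below are an illustrative extension of the documented openstack baremetal introspection data save command; they assume jq is installed and that <UUID> is replaced with the UUID of the bare-metal node.

# List only the NICs associated with each NUMA node
openstack baremetal introspection data save <UUID> | jq '.numa_topology.nics'

# Show the RAM (in kilobytes) reported for each NUMA node
openstack baremetal introspection data save <UUID> | jq '.numa_topology.ram'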
Chapter 83. Execution control in the decision engine | Chapter 83. Execution control in the decision engine When new rule data enters the working memory of the decision engine, rules may become fully matched and eligible for execution. A single working memory action can result in multiple eligible rule executions. When a rule is fully matched, the decision engine creates an activation instance, referencing the rule and the matched facts, and adds the activation onto the decision engine agenda. The agenda controls the execution order of these rule activations using a conflict resolution strategy. After the first call of fireAllRules() in the Java application, the decision engine cycles repeatedly through two phases: Agenda evaluation. In this phase, the decision engine selects all rules that can be executed. If no executable rules exist, the execution cycle ends. If an executable rule is found, the decision engine registers the activation in the agenda and then moves on to the working memory actions phase to perform rule consequence actions. Working memory actions. In this phase, the decision engine performs the rule consequence actions (the then portion of each rule) for all activated rules previously registered in the agenda. After all the consequence actions are complete or the main Java application process calls fireAllRules() again, the decision engine returns to the agenda evaluation phase to reassess rules. Figure 83.1. Two-phase execution process in the decision engine When multiple rules exist on the agenda, the execution of one rule may cause another rule to be removed from the agenda. To avoid this, you can define how and when rules are executed in the decision engine. Some common methods for defining rule execution order are by using rule salience, agenda groups, or activation groups. 83.1. Salience for rules Each rule has an integer salience attribute that determines the order of execution. Rules with a higher salience value are given higher priority when ordered in the activation queue. The default salience value for rules is zero, but the salience can be negative or positive. For example, the following sample DRL rules are listed in the decision engine stack in the order shown: The RuleB rule is listed second, but it has a higher salience value than the RuleA rule and is therefore executed first. 83.2. Agenda groups for rules An agenda group is a set of rules bound together by the same agenda-group rule attribute. Agenda groups partition rules on the decision engine agenda. At any one time, only one group has a focus that gives that group of rules priority for execution before rules in other agenda groups. You determine the focus with a setFocus() call for the agenda group. You can also define rules with an auto-focus attribute so that the time the rule is activated, the focus is automatically given to the entire agenda group to which the rule is assigned. Each time the setFocus() call is made in a Java application, the decision engine adds the specified agenda group to the top of the rule stack. The default agenda group "MAIN" contains all rules that do not belong to a specified agenda group and is executed first in the stack unless another group has the focus. 
For example, the following sample DRL rules belong to specified agenda groups and are listed in the decision engine stack in the order shown: Sample DRL rules for banking application For this example, the rules in the "report" agenda group must always be executed first and the rules in the "calculation" agenda group must always be executed second. Any remaining rules in other agenda groups can then be executed. Therefore, the "report" and "calculation" groups must receive the focus to be executed in that order, before other rules can be executed: Set the focus for the order of agenda group execution Agenda agenda = ksession.getAgenda(); agenda.getAgendaGroup( "report" ).setFocus(); agenda.getAgendaGroup( "calculation" ).setFocus(); ksession.fireAllRules(); You can also use the clear() method to cancel all the activations generated by the rules belonging to a given agenda group before each has had a chance to be executed: Cancel all other rule activations ksession.getAgenda().getAgendaGroup( "Group A" ).clear(); 83.3. Activation groups for rules An activation group is a set of rules bound together by the same activation-group rule attribute. In this group, only one rule can be executed. After conditions are met for a rule in that group to be executed, all other pending rule executions from that activation group are removed from the agenda. For example, the following sample DRL rules belong to the specified activation group and are listed in the decision engine stack in the order shown: Sample DRL rules for banking For this example, if the first rule in the "report" activation group is executed, the second rule in the group and all other executable rules on the agenda are removed from the agenda. 83.4. Rule execution modes and thread safety in the decision engine The decision engine supports the following rule execution modes that determine how and when the decision engine executes rules: Passive mode : (Default) The decision engine evaluates rules when a user or an application explicitly calls fireAllRules() . Passive mode in the decision engine is best for applications that require direct control over rule evaluation and execution, or for complex event processing (CEP) applications that use the pseudo clock implementation in the decision engine. Example CEP application code with the decision engine in passive mode KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration(); config.setOption( ClockTypeOption.get("pseudo") ); KieSession session = kbase.newKieSession( conf, null ); SessionPseudoClock clock = session.getSessionClock(); session.insert( tick1 ); session.fireAllRules(); clock.advanceTime(1, TimeUnit.SECONDS); session.insert( tick2 ); session.fireAllRules(); clock.advanceTime(1, TimeUnit.SECONDS); session.insert( tick3 ); session.fireAllRules(); session.dispose(); Active mode : If a user or application calls fireUntilHalt() , the decision engine starts in active mode and evaluates rules continually until the user or application explicitly calls halt() . Active mode in the decision engine is best for applications that delegate control of rule evaluation and execution to the decision engine, or for complex event processing (CEP) applications that use the real-time clock implementation in the decision engine. Active mode is also optimal for CEP applications that use active queries. 
Example CEP application code with the decision engine in active mode KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration(); config.setOption( ClockTypeOption.get("realtime") ); KieSession session = kbase.newKieSession( conf, null ); new Thread( new Runnable() { @Override public void run() { session.fireUntilHalt(); } } ).start(); session.insert( tick1 ); ... Thread.sleep( 1000L ); ... session.insert( tick2 ); ... Thread.sleep( 1000L ); ... session.insert( tick3 ); session.halt(); session.dispose(); This example calls fireUntilHalt() from a dedicated execution thread to prevent the current thread from being blocked indefinitely while the decision engine continues evaluating rules. The dedicated thread also enables you to call halt() at a later stage in the application code. Although you should avoid using both fireAllRules() and fireUntilHalt() calls, especially from different threads, the decision engine can handle such situations safely using thread-safety logic and an internal state machine. If a fireAllRules() call is in progress and you call fireUntilHalt() , the decision engine continues to run in passive mode until the fireAllRules() operation is complete and then starts in active mode in response to the fireUntilHalt() call. However, if the decision engine is running in active mode following a fireUntilHalt() call and you call fireAllRules() , the fireAllRules() call is ignored and the decision engine continues to run in active mode until you call halt() . For added thread safety in active mode, the decision engine supports a submit() method that you can use to group and perform operations on a KIE session in a thread-safe, atomic action: Example application code with submit() method to perform atomic operations in active mode KieSession session = ...; new Thread( new Runnable() { @Override public void run() { session.fireUntilHalt(); } } ).start(); final FactHandle fh = session.insert( fact_a ); ... Thread.sleep( 1000L ); ... session.submit( new KieSession.AtomicAction() { @Override public void execute( KieSession kieSession ) { fact_a.setField("value"); kieSession.update( fh, fact_a ); kieSession.insert( fact_1 ); kieSession.insert( fact_2 ); kieSession.insert( fact_3 ); } } ); ... Thread.sleep( 1000L ); ... session.insert( fact_z ); session.halt(); session.dispose(); Thread safety and atomic operations are also helpful from a client-side perspective. For example, you might need to insert more than one fact at a given time, but require the decision engine to consider the insertions as an atomic operation and to wait until all the insertions are complete before evaluating the rules again. 83.5. Fact propagation modes in the decision engine The decision engine supports the following fact propagation modes that determine how the decision engine progresses inserted facts through the engine network in preparation for rule execution: Lazy : (Default) Facts are propagated in batch collections at rule execution, not in real time as the facts are individually inserted by a user or application. As a result, the order in which the facts are ultimately propagated through the decision engine may be different from the order in which the facts were individually inserted. Immediate : Facts are propagated immediately in the order that they are inserted by a user or application. Eager : Facts are propagated lazily (in batch collections), but before rule execution. 
The decision engine uses this propagation behavior for rules that have the no-loop or lock-on-active attribute. By default, the Phreak rule algorithm in the decision engine uses lazy fact propagation for improved rule evaluation overall. However, in a few cases, this lazy propagation behavior can alter the expected result of certain rule executions that may require immediate or eager propagation. For example, the following rule uses a specified query with a ? prefix to invoke the query in pull-only or passive fashion: Example rule with a passive query For this example, the rule should be executed only when a String that satisfies the query is inserted before the Integer , such as in the following example commands: Example commands that should trigger the rule execution KieSession ksession = ... ksession.insert("1"); ksession.insert(1); ksession.fireAllRules(); However, due to the default lazy propagation behavior in Phreak, the decision engine does not detect the insertion sequence of the two facts in this case, so this rule is executed regardless of String and Integer insertion order. For this example, immediate propagation is required for the expected rule evaluation. To alter the decision engine propagation mode to achieve the expected rule evaluation in this case, you can add the @Propagation(<type>) tag to your rule and set <type> to LAZY , IMMEDIATE , or EAGER . In the same example rule, the immediate propagation annotation enables the rule to be evaluated only when a String that satisfies the query is inserted before the Integer , as expected: Example rule with a passive query and specified propagation mode 83.6. Agenda evaluation filters The decision engine supports an AgendaFilter object in the filter interface that you can use to allow or deny the evaluation of specified rules during agenda evaluation. You can specify an agenda filter as part of a fireAllRules() call. The following example code permits only rules ending with the string "Test" to be evaluated and executed. All other rules are filtered out of the decision engine agenda. Example agenda filter definition ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( "Test" ) ); | [
"rule \"RuleA\" salience 95 when $fact : MyFact( field1 == true ) then System.out.println(\"Rule2 : \" + $fact); update($fact); end rule \"RuleB\" salience 100 when $fact : MyFact( field1 == false ) then System.out.println(\"Rule1 : \" + $fact); $fact.setField1(true); update($fact); end",
"rule \"Increase balance for credits\" agenda-group \"calculation\" when ap : AccountPeriod() acc : Account( $accountNo : accountNo ) CashFlow( type == CREDIT, accountNo == $accountNo, date >= ap.start && <= ap.end, $amount : amount ) then acc.balance += $amount; end",
"rule \"Print balance for AccountPeriod\" agenda-group \"report\" when ap : AccountPeriod() acc : Account() then System.out.println( acc.accountNo + \" : \" + acc.balance ); end",
"Agenda agenda = ksession.getAgenda(); agenda.getAgendaGroup( \"report\" ).setFocus(); agenda.getAgendaGroup( \"calculation\" ).setFocus(); ksession.fireAllRules();",
"ksession.getAgenda().getAgendaGroup( \"Group A\" ).clear();",
"rule \"Print balance for AccountPeriod1\" activation-group \"report\" when ap : AccountPeriod1() acc : Account() then System.out.println( acc.accountNo + \" : \" + acc.balance ); end",
"rule \"Print balance for AccountPeriod2\" activation-group \"report\" when ap : AccountPeriod2() acc : Account() then System.out.println( acc.accountNo + \" : \" + acc.balance ); end",
"KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration(); config.setOption( ClockTypeOption.get(\"pseudo\") ); KieSession session = kbase.newKieSession( config, null ); SessionPseudoClock clock = session.getSessionClock(); session.insert( tick1 ); session.fireAllRules(); clock.advanceTime(1, TimeUnit.SECONDS); session.insert( tick2 ); session.fireAllRules(); clock.advanceTime(1, TimeUnit.SECONDS); session.insert( tick3 ); session.fireAllRules(); session.dispose();",
"KieSessionConfiguration config = KieServices.Factory.get().newKieSessionConfiguration(); config.setOption( ClockTypeOption.get(\"realtime\") ); KieSession session = kbase.newKieSession( config, null ); new Thread( new Runnable() { @Override public void run() { session.fireUntilHalt(); } } ).start(); session.insert( tick1 ); ... Thread.sleep( 1000L ); session.insert( tick2 ); ... Thread.sleep( 1000L ); session.insert( tick3 ); session.halt(); session.dispose();",
"KieSession session = ...; new Thread( new Runnable() { @Override public void run() { session.fireUntilHalt(); } } ).start(); final FactHandle fh = session.insert( fact_a ); ... Thread.sleep( 1000L ); session.submit( new KieSession.AtomicAction() { @Override public void execute( KieSession kieSession ) { fact_a.setField(\"value\"); kieSession.update( fh, fact_a ); kieSession.insert( fact_1 ); kieSession.insert( fact_2 ); kieSession.insert( fact_3 ); } } ); ... Thread.sleep( 1000L ); session.insert( fact_z ); session.halt(); session.dispose();",
"query Q (Integer i) String( this == i.toString() ) end rule \"Rule\" when $i : Integer() ?Q( $i; ) then System.out.println( $i ); end",
"KieSession ksession = ... ksession.insert(\"1\"); ksession.insert(1); ksession.fireAllRules();",
"query Q (Integer i) String( this == i.toString() ) end rule \"Rule\" @Propagation(IMMEDIATE) when $i : Integer() ?Q( $i; ) then System.out.println( $i ); end",
"ksession.fireAllRules( new RuleNameEndsWithAgendaFilter( \"Test\" ) );"
]
| https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/execution-control-con_decision-engine |
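The RuleNameEndsWithAgendaFilter used above is one of the ready-made filters in the KIE API. When you need different match criteria, you can implement the AgendaFilter interface yourself. The following Java sketch assumes the org.kie.api.runtime.rule package and an illustrative "Report" rule-name prefix; it is an example of the technique, not code taken from the product documentation.

import org.kie.api.runtime.rule.AgendaFilter;
import org.kie.api.runtime.rule.Match;

// Accepts only rule matches whose rule name starts with "Report";
// all other activations are skipped for this fireAllRules() call.
public class ReportRulesOnlyFilter implements AgendaFilter {
    @Override
    public boolean accept(Match match) {
        return match.getRule().getName().startsWith("Report");
    }
}

// Usage with an existing KieSession:
// ksession.fireAllRules( new ReportRulesOnlyFilter() );

A filter only controls which matches are evaluated during that particular call; matches that are not accepted are not executed in that call, but they are not removed from the agenda.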
Chapter 14. Developing Debezium custom data type converters | Chapter 14. Developing Debezium custom data type converters Important The use of custom-developed converters is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . Each field in a Debezium change event record represents a field or column in the source table or data collection. When a connector emits a change event record to Kafka, it converts the data type of each field in the source to a Kafka Connect schema type. Column values are likewise converted to match the schema type of the destination field. For each connector, a default mapping specifies how the connector converts each data type. These default mappings are described in the data types documentation for each connector. While the default mappings are generally sufficient, for some applications you might want to apply an alternate mapping. For example, you might need a custom mapping if the default mapping exports a column using the format of milliseconds since the UNIX epoch, but your downstream application can only consume the column values as formatted strings. You customize data type mappings by developing and deploying a custom converter. You configure custom converters to act on all columns of a certain type, or you can narrow their scope so that they apply to a specific table column only. The converter function intercepts data type conversion requests for any columns that match a specified criteria, and then performs the specified conversion. The converter ignores columns that do not match the specified criteria. Custom converters are Java classes that implement the Debezium service provider interface (SPI). You enable and configure a custom converter by setting the converters property in the connector configuration. The converters property specifies the converters that are available to a connector, and can include sub-properties that further modify conversion behavior. After you start a connector, the converters that are enabled in the connector configuration are instantiated and are added to a registry. The registry associates each converter with the columns or fields for it to process. Whenever Debezium processes a new change event, it invokes the configured converter to convert the columns or fields for which it is registered. 14.1. Creating a Debezium custom data type converter The following example shows a converter implementation of a Java class that implements the interface io.debezium.spi.converter.CustomConverter : public interface CustomConverter<S, F extends ConvertedField> { @FunctionalInterface interface Converter { 1 Object convert(Object input); } public interface ConverterRegistration<S> { 2 void register(S fieldSchema, Converter converter); 3 } void configure(Properties props); void converterFor(F field, ConverterRegistration<S> registration); 4 } 1 A function for converting data from one type to another. 2 Callback for registering a converter. 3 Registers the given schema and converter for the current field. Should not be invoked more than once for the same field. 
4 Registers the customized value and schema converter for use with a specific field. Custom converter methods Implementations of the CustomConverter interface must include the following methods: configure() Passes the properties specified in the connector configuration to the converter instance. The configure method runs when the connector is initialized. You can use a converter with multiple connectors and modify its behavior based on the connector's property settings. The configure method accepts the following argument: props Contains the properties to pass to the converter instance. Each property specifies the format for converting the values of a particular type of column. converterFor() Registers the converter to process specific columns or fields in the data source. Debezium invokes the converterFor() method to prompt the converter to call registration for the conversion. The converterFor method runs once for each column. The method accepts the following arguments: field An object that passes metadata about the field or column that is processed. The column metadata can include the name of the column or field, the name of the table or collection, the data type, size, and so forth. registration An object of type io.debezium.spi.converter.CustomConverter.ConverterRegistration that provides the target schema definition and the code for converting the column data. The converter calls the registration parameter when the source column matches the type that the converter should process. It then calls the register method to define the converter for each column in the schema. Schemas are represented using the Kafka Connect SchemaBuilder API. 14.1.1. Debezium custom converter example The following example implements a simple converter that performs the following operations: Runs the configure method, which configures the converter based on the value of the schema.name property that is specified in the connector configuration. The converter configuration is specific to each instance. Runs the converterFor method, which registers the converter to process values in source columns for which the data type is set to isbn . Identifies the target STRING schema based on the value that is specified for the schema.name property. Converts ISBN data in the source column to String values. Example 14.1. A simple custom converter public static class IsbnConverter implements CustomConverter<SchemaBuilder, RelationalColumn> { private SchemaBuilder isbnSchema; @Override public void configure(Properties props) { isbnSchema = SchemaBuilder.string().name(props.getProperty("schema.name")); } @Override public void converterFor(RelationalColumn column, ConverterRegistration<SchemaBuilder> registration) { if ("isbn".equals(column.typeName())) { registration.register(isbnSchema, x -> x.toString()); } } } 14.1.2. Debezium and Kafka Connect API module dependencies A custom converter Java project has compile dependencies on the Debezium API and Kafka Connect API library modules. These compile dependencies must be included in your project's pom.xml , as shown in the following example: <dependency> <groupId>io.debezium</groupId> <artifactId>debezium-api</artifactId> <version>${version.debezium}</version> 1 </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>connect-api</artifactId> <version>${version.kafka}</version> 2 </dependency> 1 ${version.debezium} represents the version of the Debezium connector. 2 ${version.kafka} represents the version of Apache Kafka in your environment. 14.2. 
Using custom converters with Debezium connectors Custom converters act on specific columns or column types in a source table to specify how to convert the data types in the source to Kafka Connect schema types. To use a custom converter with a connector, you deploy the converter JAR file alongside the connector file, and then configure the connector to use the converter. 14.2.1. Deploying a custom converter Prerequisites You have a custom converter Java program. Procedure To use a custom converter with a Debezium connector, export the Java project to a JAR file, and copy the file to the directory that contains the JAR file for each Debezium connector that you want to use it with. For example, in a typical deployment, the Debezium connector files are stored in subdirectories of a Kafka Connect directory ( /kafka/connect ), with each connector JAR in its own subdirectory ( /kafka/connect/debezium-connector-db2 , /kafka/connect/debezium-connector-mysql , and so forth). To use a converter with a connector, add the converter JAR file to the connector's subdirectory. Note To use a converter with multiple connectors, you must place a copy of the converter JAR file in each connector subdirectory. 14.2.2. Configuring a connector to use a custom converter To enable a connector to use the custom converter, you add properties to the connector configuration that specify the converter name and class. If the converter requires further information to customize the formats of specific data types, you can also define other configuration options to provide that information. Procedure Enable a converter for a connector instance by adding the following mandatory properties to the connector configuration: 1 The converters property is mandatory and enumerates a comma-separated list of symbolic names of the converter instances to use with the connector. The values listed for this property serve as prefixes in the names of other properties that you specify for the converter. 2 The <converterSymbolicName> .type property is mandatory, and specifies the name of the class that implements the converter. For example, for the earlier custom converter example , you would add the following properties to the connector configuration: To associate other properties with a custom converter, prefix the property names with the symbolic name of the converter, followed by a dot ( . ). The symbolic name is a label that you specify as a value for the converters property. For example, to add a property for the preceding isbn converter to specify the schema.name to pass to the configure method in the converter code, add the following property: Revised on 2024-01-08 18:45:08 UTC | [
"public interface CustomConverter<S, F extends ConvertedField> { @FunctionalInterface interface Converter { 1 Object convert(Object input); } public interface ConverterRegistration<S> { 2 void register(S fieldSchema, Converter converter); 3 } void configure(Properties props); void converterFor(F field, ConverterRegistration<S> registration); 4 }",
"public static class IsbnConverter implements CustomConverter<SchemaBuilder, RelationalColumn> { private SchemaBuilder isbnSchema; @Override public void configure(Properties props) { isbnSchema = SchemaBuilder.string().name(props.getProperty(\"schema.name\")); } @Override public void converterFor(RelationalColumn column, ConverterRegistration<SchemaBuilder> registration) { if (\"isbn\".equals(column.typeName())) { registration.register(isbnSchema, x -> x.toString()); } } }",
"<dependency> <groupId>io.debezium</groupId> <artifactId>debezium-api</artifactId> <version>${version.debezium}</version> 1 </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>connect-api</artifactId> <version>${version.kafka}</version> 2 </dependency>",
"converters: <converterSymbolicName> 1 <converterSymbolicName> .type: <fullyQualifiedConverterClassName> 2",
"converters: isbn isbn.type: io.debezium.test.IsbnConverter",
"isbn.schema.name: io.debezium.postgresql.type.Isbn"
]
| https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/developing-debezium-custom-data-type-converters |
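The IsbnConverter above matches every column whose type is isbn. A converter can also be narrowed to a single table column by checking the field metadata passed to converterFor(). The sketch below assumes the dataCollection() and name() accessors of RelationalColumn and hypothetical upper.table / upper.column properties; treat it as an illustration of the SPI described above rather than documented product code.

public static class UppercaseColumnConverter implements CustomConverter<SchemaBuilder, RelationalColumn> {

    private String table;   // for example "inventory.customers" (assumed property value)
    private String column;  // for example "email" (assumed property value)

    @Override
    public void configure(Properties props) {
        // Property names arrive with the converter prefix already stripped,
        // as with "schema.name" in the IsbnConverter example.
        table = props.getProperty("table");
        column = props.getProperty("column");
    }

    @Override
    public void converterFor(RelationalColumn field, ConverterRegistration<SchemaBuilder> registration) {
        // Register only for the one configured column instead of a whole data type.
        if (table != null && table.equals(field.dataCollection())
                && column != null && column.equals(field.name())) {
            registration.register(SchemaBuilder.string(),
                    value -> value == null ? null : value.toString().toUpperCase());
        }
    }
}

A matching connector configuration would follow the same pattern as the isbn example, for instance converters: upper, upper.type: com.example.UppercaseColumnConverter, upper.table: inventory.customers, and upper.column: email.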
Chapter 3. Hybrid Cloud Console User Access | Chapter 3. Hybrid Cloud Console User Access The User Access feature is an implementation of role-based access control (RBAC) that controls access to various services hosted on the Red Hat Hybrid Cloud Console. Users with the Organization Administrator role use the User Access feature to grant other users access to services hosted on the Hybrid Cloud Console. An Organization Administrator can assign the special role User Access Administrator to other users who do not have the Organization Administrator role. Users with the User Access Administrator role can manage user access on the Red Hat Hybrid Cloud Console . User access on Red Hat Hybrid Cloud Console uses an additive model, which means that actions are only permitted, not denied. To control access, users with the Organization Administrator role assign the appropriate roles with the desired permissions to groups, then add users to those groups. The access permitted to an individual user is the sum of all roles assigned to all groups to which that user belongs. Additional resources For detailed information about the User Access feature for the Organization Administrator role, see the User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP . For a list of quick starts about the User Access feature for the Organization Administrator role, see the Identity & Access Management Learning Resources page. 3.1. The User Access groups, roles, and permissions User Access uses the following categories to determine the level of user access that an Organization Administrator can grant to the supported Red Hat Hybrid Cloud Console services. The access provided to any authorized user depends on the group that the user belongs to and the roles assigned to that group. Group : A collection of users belonging to an account which provides the mapping of roles to users. An Organization Administrator can use groups to assign one or more roles to a group and to include one or more users in a group. You can create a group with no roles and no users. Roles : A set of permissions that provide access to a given service, such as Insights. The permissions to perform certain operations are assigned to specific roles. Roles are assigned to groups. For example, you might have a read role and a write role for a service. Adding both roles to a group grants all members of that group read and write permissions to that service. Permissions : A discrete action that can be requested of a service. Permissions are assigned to roles. 3.2. Viewing your permissions to services Your Organization Administrator grants and manages your access to the different services in the Red Hat Hybrid Cloud Console. You can view your permissions for each service on the console. Prerequisites You are logged in to the Hybrid Cloud Console. Procedure Click your user avatar in the upper right of the Red Hat Hybrid Cloud Console window. A drop-down list appears. Click My User Access . The My User Access page opens. Select a services group, for example Red Hat Enterprise Linux. A table of services appears. Your permissions are listed in the Operation column. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console_with_fedramp/user-access_getting-started |
27.4. Making Files Accessible From the Console | 27.4. Making Files Accessible From the Console In /etc/security/console.perms , there is a section with lines like: You can add your own lines to this section, if necessary. Make sure that any lines you add refer to the appropriate device. For example, you could add the following line: (Of course, make sure that /dev/scanner is really your scanner and not, say, your hard drive.) That is the first step. The second step is to define what is done with those files. Look in the last section of /etc/security/console.perms for lines similar to: and add a line like: Then, when you log in at the console, you are given ownership of the /dev/scanner device with the permissions of 0600 (readable and writable by you only). When you log out, the device is owned by root and still has the permissions 0600 (now readable and writable by root only). | [
"<floppy>=/dev/fd[0-1]* /dev/floppy/* /mnt/floppy* <sound>=/dev/dsp* /dev/audio* /dev/midi* /dev/mixer* /dev/sequencer /dev/sound/* /dev/beep /dev/snd/* <cdrom>=/dev/cdrom* /dev/cdroms/* /dev/cdwriter* /mnt/cdrom*",
"<scanner>=/dev/scanner /dev/usb/scanner*",
"<console> 0660 <floppy> 0660 root.floppy <console> 0600 <sound> 0640 root <console> 0600 <cdrom> 0600 root.disk",
"<console> 0600 <scanner> 0600 root"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Console_Access-Making_Files_Accessible_From_the_Console |
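Putting the two steps together, the scanner-related additions to /etc/security/console.perms would look roughly like the following; the comment lines are only for orientation and are not part of the stock file.

# Device class definition (first step)
<scanner>=/dev/scanner /dev/usb/scanner*

# Permission rule applied when a user logs in at the console (second step)
<console> 0600 <scanner> 0600 root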
Chapter 8. Monitoring the local disk to shut down Directory Server on low disk space | Chapter 8. Monitoring the local disk to shut down Directory Server on low disk space When the disk space available on a system becomes too small, the Directory Server process terminates. As a consequence, there is a risk of corrupting the database or losing data. To prevent this problem, you can configure Directory Server to monitor the free disk space on the file systems that contain the configuration, transaction log, and database directories. If the free space reaches the configured threshold, Directory Server shuts down the instance. 8.1. Behavior of Directory Server depending on the amount of free disk space How Directory Server behaves when you configure monitoring depends on the amount of remaining free space: If the free disk space reaches the defined threshold, Directory Server: Disables verbose logging Disables access logging Deletes archived log files Note Directory Server always continues writing error logs, even if the threshold is reached. If the free disk space is lower than half of the configured threshold, Directory Server shuts down within a defined grace period. If the available disk space is ever lower than 4 KB, Directory Server shuts down immediately. If disk space is freed up, then Directory Server aborts the shutdown process and re-enables all of the previously disabled log settings. 8.2. Configuring local disk monitoring using the command line Directory Server can monitor the free disk space on the file systems that contain the configuration, transaction log, and database directories. Depending on the remaining free space, Directory Server disables certain logging features or shuts down. Procedure Enable the disk monitoring feature, set a threshold value and a grace period: # dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-disk-monitoring=on nsslapd-disk-monitoring-threshold=3221225472 nsslapd-disk-monitoring-grace-period=60 This command sets the threshold of free disk space to 3 GB (3,221,225,472 bytes) and the grace period to 60 minutes. Optional: Configure Directory Server not to disable access logging or delete archived logs: # dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-disk-monitoring-logging-critical=on Restart the instance: # dsctl instance_name restart 8.3. Configuring local disk monitoring using the web console Directory Server can monitor the free disk space on the file systems that contain the configuration, transaction log, and database directories. Depending on the remaining free space, Directory Server disables certain logging features or shuts down. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Server Server Settings Disk Monitoring . Select Enable Disk Space Monitoring . Set the threshold in bytes and the grace period in minutes: This example sets the monitoring threshold to 3 GB (3,221,225,472 bytes) and the time before Directory Server shuts down the instance after reaching the threshold to 60 minutes. Optional: Select Preserve Logs Even If Disk Space Gets Low Click Save Settings . Click Actions in the top right corner, and select Restart Instance . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-disk-monitoring=on nsslapd-disk-monitoring-threshold=3221225472 nsslapd-disk-monitoring-grace-period=60",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-disk-monitoring-logging-critical=on",
"dsctl instance_name restart"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_monitoring-the-local-disk-to-shut-down-directory-server-on-low-disk-space_assembly_improving-the-performance-of-views |
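After the restart, you can read the attributes back to confirm that the configuration took effect. The following invocation is a sketch that assumes the dsconf config get subcommand is available in this version; adjust the bind DN and server URL to your deployment.

# dsconf -D "cn=Directory Manager" ldap://server.example.com config get nsslapd-disk-monitoring nsslapd-disk-monitoring-threshold nsslapd-disk-monitoring-grace-period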
Red Hat Ansible Automation Platform automation mesh guide | Red Hat Ansible Automation Platform automation mesh guide Red Hat Ansible Automation Platform 2.3 This guide provides information used to deploy automation mesh as part of your Ansible Automation Platform environment Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_automation_mesh_guide/index |
function::cpuid | function::cpuid Name function::cpuid - Returns the current cpu number Synopsis Arguments None Description This function returns the current cpu number. Deprecated in SystemTap 1.4 and removed in SystemTap 1.5. | [
"cpuid:long()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cpuid |
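Because cpuid was removed in SystemTap 1.5, scripts should call the cpu() tapset function instead. The one-liner below is a sketch assuming the stock tapset; it prints the CPU that the begin probe runs on and then exits.

# stap -e 'probe begin { printf("running on CPU %d\n", cpu()); exit() }'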
Chapter 5. Deploying standalone Multicloud Object Gateway | Chapter 5. Deploying standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and select the same. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 5.3. Creating standalone Multicloud Object Gateway on IBM Z You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, see Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects the sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one entry under devicePaths. Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/deploy-standalone-multicloud-object-gateway-ibm-z |
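In addition to the web console checks above, the same verification can be approximated from the command line. These oc commands are generic sketches rather than steps from the procedure; names such as localblock come from the LocalVolume example above.

# Persistent volumes provisioned from the localblock storage class
oc get pv | grep localblock

# Multicloud Object Gateway pods in the openshift-storage namespace
oc get pods -n openshift-storage | grep noobaa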