title | content | commands | url
---|---|---|---|
Chapter 3. Configuring certificates | Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the step. Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. 
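For example, you can inspect the certificate that the API server currently presents from a host outside the cluster. The following check is a sketch; the api.<clustername>.<domain> host name and port 6443 are assumptions based on a typical cluster layout:
$ echo | openssl s_client -connect api.<clustername>.<domain>:6443 \
    -servername api.<clustername>.<domain> 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
If the issuer shown is the internal OpenShift signer, external clients that do not trust that CA cannot verify the connection until you add a named certificate as described in the next section.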
This certificate can be replaced by one that is issued by a CA that clients trust. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Login to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. 2 Replace <secret> with the name used for the secret in the step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True . USD oc get clusteroperators kube-apiserver Do not continue to the step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.7.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. 
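If you want to confirm this for a certificate that the controller has already issued, you can decode the serving certificate from its secret. This command is a sketch; <secret_name> and <namespace> are placeholders for a serving certificate secret generated as described in the following sections:
$ oc get secret <secret_name> -n <namespace> -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -text \
  | grep 'Signature Algorithm'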
The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. Important The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and are only valid for internal communications. Prerequisites: You must have a service defined. Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. 
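As an illustration of how a pod typically consumes the injected bundle, the following volume definition is a sketch; the pod and image names are placeholders, and the optional: true setting (discussed in the note that follows) lets the pod start even before the CA bundle has been injected:
apiVersion: v1
kind: Pod
metadata:
  name: example-client                        # placeholder name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    volumeMounts:
    - name: service-ca
      mountPath: /etc/service-ca              # any path your client is configured to read
      readOnly: true
  volumes:
  - name: service-ca
    configMap:
      name: <config_map_name>                 # the config map annotated for injection
      optional: true                          # do not block pod startup on injection
      items:
      - key: service-ca.crt
        path: service-ca.crt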
Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. 
Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. 
Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA. This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. 
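To confirm that your custom CAs were merged, you can read the generated bundle back and count the certificates it contains. This is a sketch, assuming the merged bundle is stored under the ca-bundle.crt key, the same key name used in the source config map:
$ oc get configmap trusted-ca-bundle -n openshift-config-managed \
    -o jsonpath='{.data.ca-bundle\.crt}' | grep -c 'BEGIN CERTIFICATE'
The count should cover both the RHCOS trust bundle and the CAs you provided.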
The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization | [
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress",
"oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator",
"oc login -u kubeadmin -p <password> https://FQDN:6443",
"oc config view --flatten > kubeconfig-newapi",
"oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config",
"oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2",
"oc get apiserver cluster -o yaml",
"spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>",
"oc get clusteroperators kube-apiserver",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.7.0 True False False 145m",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2",
"oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1",
"oc describe service <service_name>",
"Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837",
"oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true",
"oc get configmap <config_map_name> -o yaml",
"apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----",
"oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true",
"oc get apiservice <api_service_name> -o yaml",
"apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>",
"oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true",
"oc get crd <crd_name> -o yaml",
"apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true",
"oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true",
"oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>",
"oc describe service <service_name>",
"service.beta.openshift.io/serving-cert-secret-name: <secret>",
"oc delete secret <secret> 1",
"oc get secret <service_name>",
"NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s",
"oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate",
"oc delete secret/signing-key -n openshift-service-ca",
"for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done",
"apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----",
"oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config",
"oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/configuring-certificates |
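As a quick verification of the ingress certificate replacement described in Section 3.1 above, you can inspect the certificate served under the .apps sub-domain from outside the cluster. This is a sketch; the console route host name is an assumption, and any route in the wildcard domain works equally well:
$ echo | openssl s_client -connect console-openshift-console.apps.<clustername>.<domain>:443 \
    -servername console-openshift-console.apps.<clustername>.<domain> 2>/dev/null \
  | openssl x509 -noout -issuer -enddate
The issuer should now be the public CA that signed your wildcard certificate rather than the internal ingress CA.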
Chapter 2. OpenID Connect client and token propagation quickstart | Chapter 2. OpenID Connect client and token propagation quickstart Learn how to use OpenID Connect (OIDC) and OAuth2 clients with filters to get, refresh, and propagate access tokens in your applications. For more information about OIDC Client and Token Propagation support in Quarkus, see the OpenID Connect (OIDC) and OAuth2 client and filters reference guide . To protect your applications by using Bearer Token Authorization, see the OpenID Connect (OIDC) Bearer token authentication guide. 2.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.9.6 A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) jq tool 2.2. Architecture In this example, an application is built with two Jakarta REST resources, FrontendResource and ProtectedResource . Here, FrontendResource uses one of two methods to propagate access tokens to ProtectedResource : It can get a token by using an OIDC token propagation Reactive filter before propagating it. It can use an OIDC token propagation Reactive filter to propagate the incoming access token. FrontendResource has four endpoints: /frontend/user-name-with-oidc-client-token /frontend/admin-name-with-oidc-client-token /frontend/user-name-with-propagated-token /frontend/admin-name-with-propagated-token FrontendResource uses a REST Client with an OIDC token propagation Reactive filter to get and propagate an access token to ProtectedResource when either /frontend/user-name-with-oidc-client-token or /frontend/admin-name-with-oidc-client-token is called. Also, FrontendResource uses a REST Client with OpenID Connect Token Propagation Reactive Filter to propagate the current incoming access token to ProtectedResource when either /frontend/user-name-with-propagated-token or /frontend/admin-name-with-propagated-token is called. ProtectedResource has two endpoints: /protected/user-name /protected/admin-name Both endpoints return the username extracted from the incoming access token, which was propagated to ProtectedResource from FrontendResource . The only difference between these endpoints is that calling /protected/user-name is only allowed if the current access token has a user role, and calling /protected/admin-name is only allowed if the current access token has an admin role. 2.3. Solution We recommend that you follow the instructions in the sections and create the application step by step. However, you can go right to the completed example. Clone the Git repository: git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.8 , or download an archive . The solution is in the security-openid-connect-client-quickstart directory . 2.4. Creating the Maven project First, you need a new project. Create a new project with the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-client-quickstart \ --extension='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' \ --no-code cd security-openid-connect-client-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. 
Using Maven: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-client-quickstart \ -Dextensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' \ -DnoCode cd security-openid-connect-client-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-client-quickstart" This command generates a Maven project, importing the oidc , oidc-client-reactive-filter , oidc-token-propagation-reactive-filter , and resteasy-reactive extensions. If you already have your Quarkus project configured, you can add these extensions to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' Using Gradle: ./gradlew addExtension --extensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' This command adds the following extensions to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-client-reactive-filter</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-token-propagation-reactive</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-reactive</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive") 2.5. Writing the application Start by implementing ProtectedResource : package org.acme.security.openid.connect.client; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.smallrye.mutiny.Uni; import org.eclipse.microprofile.jwt.JsonWebToken; @Path("/protected") @Authenticated public class ProtectedResource { @Inject JsonWebToken principal; @GET @RolesAllowed("user") @Produces("text/plain") @Path("userName") public Uni<String> userName() { return Uni.createFrom().item(principal.getName()); } @GET @RolesAllowed("admin") @Produces("text/plain") @Path("adminName") public Uni<String> adminName() { return Uni.createFrom().item(principal.getName()); } } ProtectedResource returns a name from both userName() and adminName() methods. The name is extracted from the current JsonWebToken . , add two REST clients, OidcClientRequestReactiveFilter and AccessTokenRequestReactiveFilter , which FrontendResource uses to call ProtectedResource . 
Add the OidcClientRequestReactiveFilter REST Client: package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.reactive.filter.OidcClientRequestReactiveFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @RegisterProvider(OidcClientRequestReactiveFilter.class) @Path("/") public interface RestClientWithOidcClientFilter { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(); @GET @Produces("text/plain") @Path("adminName") Uni<String> getAdminName(); } The RestClientWithOidcClientFilter interface depends on OidcClientRequestReactiveFilter to get and propagate the tokens. Add the AccessTokenRequestReactiveFilter REST Client: package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @RegisterProvider(AccessTokenRequestReactiveFilter.class) @Path("/") public interface RestClientWithTokenPropagationFilter { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(); @GET @Produces("text/plain") @Path("adminName") Uni<String> getAdminName(); } The RestClientWithTokenPropagationFilter interface depends on AccessTokenRequestReactiveFilter to propagate the incoming already-existing tokens. Note that both RestClientWithOidcClientFilter and RestClientWithTokenPropagationFilter interfaces are the same. This is because combining OidcClientRequestReactiveFilter and AccessTokenRequestReactiveFilter on the same REST Client causes side effects because both filters can interfere with each other. For example, OidcClientRequestReactiveFilter can override the token propagated by AccessTokenRequestReactiveFilter , or AccessTokenRequestReactiveFilter can fail if it is called when no token is available to propagate and OidcClientRequestReactiveFilter is expected to get a new token instead. 
Now, finish creating the application by adding FrontendResource : package org.acme.security.openid.connect.client; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.smallrye.mutiny.Uni; @Path("/frontend") public class FrontendResource { @Inject @RestClient RestClientWithOidcClientFilter restClientWithOidcClientFilter; @Inject @RestClient RestClientWithTokenPropagationFilter restClientWithTokenPropagationFilter; @GET @Path("user-name-with-oidc-client-token") @Produces("text/plain") public Uni<String> getUserNameWithOidcClientToken() { return restClientWithOidcClientFilter.getUserName(); } @GET @Path("admin-name-with-oidc-client-token") @Produces("text/plain") public Uni<String> getAdminNameWithOidcClientToken() { return restClientWithOidcClientFilter.getAdminName(); } @GET @Path("user-name-with-propagated-token") @Produces("text/plain") public Uni<String> getUserNameWithPropagatedToken() { return restClientWithTokenPropagationFilter.getUserName(); } @GET @Path("admin-name-with-propagated-token") @Produces("text/plain") public Uni<String> getAdminNameWithPropagatedToken() { return restClientWithTokenPropagationFilter.getAdminName(); } } FrontendResource uses REST Client with an OIDC token propagation Reactive filter to get and propagate an access token to ProtectedResource when either /frontend/user-name-with-oidc-client-token or /frontend/admin-name-with-oidc-client-token is called. Also, FrontendResource uses REST Client with OpenID Connect Token Propagation Reactive Filter to propagate the current incoming access token to ProtectedResource when either /frontend/user-name-with-propagated-token or /frontend/admin-name-with-propagated-token is called. Finally, add a Jakarta REST ExceptionMapper : package org.acme.security.openid.connect.client; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import org.jboss.resteasy.reactive.ClientWebApplicationException; @Provider public class FrontendExceptionMapper implements ExceptionMapper<ClientWebApplicationException> { @Override public Response toResponse(ClientWebApplicationException t) { return Response.status(t.getResponse().getStatus()).build(); } } This exception mapper is only added to verify during the tests that ProtectedResource returns 403 when the token has no expected role. Without this mapper, RESTEasy Reactive would correctly convert the exceptions that escape from REST Client calls to 500 to avoid leaking the information from the downstream resources such as ProtectedResource . However, in the tests, it would not be possible to assert that 500 is caused by an authorization exception instead of some internal error. 2.6. Configuring the application Having prepared the code, you configure the application: # Configure OIDC %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret # Tell Dev Services for Keycloak to import the realm file # This property is ineffective when running the application in JVM or Native modes but only in dev and test modes. 
quarkus.keycloak.devservices.realm-path=quarkus-realm.json # Configure OIDC Client quarkus.oidc-client.auth-server-url=USD{quarkus.oidc.auth-server-url} quarkus.oidc-client.client-id=USD{quarkus.oidc.client-id} quarkus.oidc-client.credentials.secret=USD{quarkus.oidc.credentials.secret} quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice # Configure REST clients %prod.port=8080 %dev.port=8080 %test.port=8081 org.acme.security.openid.connect.client.RestClientWithOidcClientFilter/mp-rest/url=http://localhost:USD{port}/protected org.acme.security.openid.connect.client.RestClientWithTokenPropagationFilter/mp-rest/url=http://localhost:USD{port}/protected This configuration references Keycloak, which is used by ProtectedResource to verify the incoming access tokens and by OidcClient to get the tokens for a user alice by using a password grant. Both REST clients point to `ProtectedResource's HTTP address. Note Adding a %prod. profile prefix to quarkus.oidc.auth-server-url ensures that Dev Services for Keycloak launches a container for you when the application is run in dev or test modes. For more information, see the Running the application in dev mode section. 2.7. Starting and configuring the Keycloak server Note Do not start the Keycloak server when you run the application in dev or test modes; Dev Services for Keycloak launches a container. For more information, see the Running the application in dev mode section. Ensure you put the realm configuration file on the classpath, in the target/classes directory. This placement ensures that the file is automatically imported in dev mode. However, if you have already built a complete solution , you do not need to add the realm file to the classpath because the build process has already done so. To start a Keycloak Server, you can use Docker and just run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev Set {keycloak.version} to 24.0.0 or later. You can access your Keycloak Server at localhost:8180 . Log in as the admin user to access the Keycloak Administration Console. The password is admin . Import the realm configuration file to create a new realm. For more details, see the Keycloak documentation about how to create a new realm . This quarkus realm file adds a frontend client, and alice and admin users. alice has a user role. admin has both user and admin roles. 2.8. Running the application in dev mode To run the application in a dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev Dev Services for Keycloak launches a Keycloak container and imports quarkus-realm.json . Open a Dev UI available at /q/dev-ui and click a Provider: Keycloak link in the OpenID Connect Dev UI card. When asked, log in to a Single Page Application provided by the OpenID Connect Dev UI: Log in as alice , with the password, alice . This user has a user role. Access /frontend/user-name-with-propagated-token , which returns 200 . Access /frontend/admin-name-with-propagated-token , which returns 403 . Log out and back in as admin with the password, admin . This user has both admin and user roles. Access /frontend/user-name-with-propagated-token , which returns 200 . Access /frontend/admin-name-with-propagated-token , which returns 200 . 
In this case, you are testing that FrontendResource can propagate the access tokens from the OpenID Connect Dev UI. 2.9. Running the application in JVM mode After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then, run it: java -jar target/quarkus-app/quarkus-run.jar 2.10. Running the application in native mode You can compile this demo into native code; no modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary and optimized to run with minimal resources. Compilation takes longer, so this step is turned off by default. To build again, enable the native profile: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.package.type=native After a little while, when the build finishes, you can run the native binary directly: ./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner 2.11. Testing the application For more information about testing your application in dev mode, see the preceding Running the application in dev mode section. You can test the application launched in JVM or Native modes with curl . Obtain an access token for alice : export access_token=USD(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' \ ) Now, use this token to call /frontend/user-name-with-propagated-token and /frontend/admin-name-with-propagated-token : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-propagated-token \ -H "Authorization: Bearer "USDaccess_token This command returns the 200 status code and the name alice . curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-propagated-token \ -H "Authorization: Bearer "USDaccess_token In contrast, this command returns 403 . Recall that alice only has a user role. , obtain an access token for admin : export access_token=USD(\ curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token \ --user backend-service:secret \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' \ ) Use this token to call /frontend/user-name-with-propagated-token : curl -i -X GET \ http://localhost:8080/frontend/user-name-with-propagated-token \ -H "Authorization: Bearer "USDaccess_token This command returns a 200 status code and the name admin . Now, use this token to call /frontend/admin-name-with-propagated-token : curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-propagated-token \ -H "Authorization: Bearer "USDaccess_token This command also returns the 200 status code and the name admin because admin has both user and admin roles. Now, check the FrontendResource methods, which do not propagate the existing tokens but use OidcClient to get and propagate the tokens. As already shown, OidcClient is configured to get the tokens for the alice user, so: curl -i -X GET \ http://localhost:8080/frontend/user-name-with-oidc-client-token This command returns the 200 status code and the name alice . 
curl -i -X GET \ http://localhost:8080/frontend/admin-name-with-oidc-client-token In contrast with the preceding command, this command returns a 403 status code. 2.12. References OpenID Connect Client and Token Propagation Reference Guide OIDC Bearer token authentication Quarkus Security overview | [
"quarkus create app org.acme:security-openid-connect-client-quickstart --extension='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' --no-code cd security-openid-connect-client-quickstart",
"mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-client-quickstart -Dextensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive' -DnoCode cd security-openid-connect-client-quickstart",
"quarkus extension add oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive",
"./mvnw quarkus:add-extension -Dextensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive'",
"./gradlew addExtension --extensions='oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-client-reactive-filter</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-token-propagation-reactive</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-reactive</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-oidc,oidc-client-reactive-filter,oidc-token-propagation-reactive,resteasy-reactive\")",
"package org.acme.security.openid.connect.client; import jakarta.annotation.security.RolesAllowed; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.smallrye.mutiny.Uni; import org.eclipse.microprofile.jwt.JsonWebToken; @Path(\"/protected\") @Authenticated public class ProtectedResource { @Inject JsonWebToken principal; @GET @RolesAllowed(\"user\") @Produces(\"text/plain\") @Path(\"userName\") public Uni<String> userName() { return Uni.createFrom().item(principal.getName()); } @GET @RolesAllowed(\"admin\") @Produces(\"text/plain\") @Path(\"adminName\") public Uni<String> adminName() { return Uni.createFrom().item(principal.getName()); } }",
"package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.reactive.filter.OidcClientRequestReactiveFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @RegisterProvider(OidcClientRequestReactiveFilter.class) @Path(\"/\") public interface RestClientWithOidcClientFilter { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(); @GET @Produces(\"text/plain\") @Path(\"adminName\") Uni<String> getAdminName(); }",
"package org.acme.security.openid.connect.client; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter; import io.smallrye.mutiny.Uni; @RegisterRestClient @RegisterProvider(AccessTokenRequestReactiveFilter.class) @Path(\"/\") public interface RestClientWithTokenPropagationFilter { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(); @GET @Produces(\"text/plain\") @Path(\"adminName\") Uni<String> getAdminName(); }",
"package org.acme.security.openid.connect.client; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.smallrye.mutiny.Uni; @Path(\"/frontend\") public class FrontendResource { @Inject @RestClient RestClientWithOidcClientFilter restClientWithOidcClientFilter; @Inject @RestClient RestClientWithTokenPropagationFilter restClientWithTokenPropagationFilter; @GET @Path(\"user-name-with-oidc-client-token\") @Produces(\"text/plain\") public Uni<String> getUserNameWithOidcClientToken() { return restClientWithOidcClientFilter.getUserName(); } @GET @Path(\"admin-name-with-oidc-client-token\") @Produces(\"text/plain\") public Uni<String> getAdminNameWithOidcClientToken() { return restClientWithOidcClientFilter.getAdminName(); } @GET @Path(\"user-name-with-propagated-token\") @Produces(\"text/plain\") public Uni<String> getUserNameWithPropagatedToken() { return restClientWithTokenPropagationFilter.getUserName(); } @GET @Path(\"admin-name-with-propagated-token\") @Produces(\"text/plain\") public Uni<String> getAdminNameWithPropagatedToken() { return restClientWithTokenPropagationFilter.getAdminName(); } }",
"package org.acme.security.openid.connect.client; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import org.jboss.resteasy.reactive.ClientWebApplicationException; @Provider public class FrontendExceptionMapper implements ExceptionMapper<ClientWebApplicationException> { @Override public Response toResponse(ClientWebApplicationException t) { return Response.status(t.getResponse().getStatus()).build(); } }",
"Configure OIDC %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=backend-service quarkus.oidc.credentials.secret=secret Tell Dev Services for Keycloak to import the realm file This property is ineffective when running the application in JVM or Native modes but only in dev and test modes. quarkus.keycloak.devservices.realm-path=quarkus-realm.json Configure OIDC Client quarkus.oidc-client.auth-server-url=USD{quarkus.oidc.auth-server-url} quarkus.oidc-client.client-id=USD{quarkus.oidc.client-id} quarkus.oidc-client.credentials.secret=USD{quarkus.oidc.credentials.secret} quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice Configure REST clients %prod.port=8080 %dev.port=8080 %test.port=8081 org.acme.security.openid.connect.client.RestClientWithOidcClientFilter/mp-rest/url=http://localhost:USD{port}/protected org.acme.security.openid.connect.client.RestClientWithTokenPropagationFilter/mp-rest/url=http://localhost:USD{port}/protected",
"docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.package.type=native",
"./target/security-openid-connect-quickstart-1.0.0-SNAPSHOT-runner",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=alice&password=alice&grant_type=password' | jq --raw-output '.access_token' )",
"curl -i -X GET http://localhost:8080/frontend/user-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token",
"curl -i -X GET http://localhost:8080/frontend/admin-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token",
"export access_token=USD( curl --insecure -X POST http://localhost:8180/realms/quarkus/protocol/openid-connect/token --user backend-service:secret -H 'content-type: application/x-www-form-urlencoded' -d 'username=admin&password=admin&grant_type=password' | jq --raw-output '.access_token' )",
"curl -i -X GET http://localhost:8080/frontend/user-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token",
"curl -i -X GET http://localhost:8080/frontend/admin-name-with-propagated-token -H \"Authorization: Bearer \"USDaccess_token",
"curl -i -X GET http://localhost:8080/frontend/user-name-with-oidc-client-token",
"curl -i -X GET http://localhost:8080/frontend/admin-name-with-oidc-client-token"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_client_and_token_propagation/security-openid-connect-client |
Chapter 3. Getting started with OpenShift Virtualization | Chapter 3. Getting started with OpenShift Virtualization You can install and configure a basic OpenShift Virtualization environment to explore its features and functionality. Note Cluster configuration procedures require cluster-admin privileges. 3.1. Before you begin Prepare your cluster for OpenShift Virtualization. Review the storage requirements for cloning, snapshots, and live migration. Install the OpenShift Virtualization Operator. Install the virtctl tool. 3.1.1. Additional resources Using a CSI-enabled storage provider. Configuring local storage for virtual machines. About the Kubernetes NMState Operator. Specifying nodes for virtual machines. 3.2. Getting started Create a virtual machine: Quick create a virtual machine using the web console. Create and customize Windows boot sources. Install VirtIO drivers and the QEMU guest agent on the virtual machine. Connect to a virtual machine: Connect to a virtual machine. Connect to the serial console or VNC console of a virtual machine using the web console. Connect to a virtual machine using SSH. Connect to a Windows virtual machine using RDP. Manage a virtual machine: Stop, start, pause, and restart a virtual machine from the web console. Manage a virtual machine, expose a port, and connect to the serial console of a virtual machine from the command line with virtctl. 3.3. Next steps Connect VMs to secondary networks: Connect a virtual machine to a Linux bridge network. Connect a virtual machine to an SR-IOV network. Monitor your OpenShift Virtualization environment: Monitor resources, details, status, and top consumers on the Virtualization Overview page. View high-level information about your virtual machines on the Virtual Machines dashboard. View virtual machine logs. Automating deployments: Automate Windows virtual machine deployments with sysprep. 3.3.1. Additional resources Creating virtual machine templates Live migration Virtual machine templates Configuring local storage Backup and restore | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/getting-started-with-openshift-virtualization |
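The virtctl commands referenced above follow a consistent pattern. The examples below are a sketch based on common virtctl usage; the virtual machine, service, and port values are placeholders, and you either run the commands in the project that contains the virtual machine or pass --namespace explicitly:
$ virtctl start <vm_name>                                        # start a stopped virtual machine
$ virtctl console <vm_name>                                      # attach to the serial console
$ virtctl expose vm <vm_name> --name=<service_name> --port=22    # expose a VM port as a service
$ virtctl stop <vm_name>                                         # stop a running virtual machine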
Managing networking resources | Managing networking resources Red Hat OpenStack Services on OpenShift 18.0 Managing network resources by using the Networking service (neutron) in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_networking_resources/index |
Chapter 3. Defining Camel routes | Chapter 3. Defining Camel routes Camel Extensions for Quarkus supports the Java DSL language to define Camel Routes. 3.1. Java DSL Extending org.apache.camel.builder.RouteBuilder and using the fluent builder methods available there is the most common way of defining Camel Routes. Here is a simple example of a route using the timer component: import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:foo?period=1000") .log("Hello World"); } } 3.1.1. Endpoint DSL Since Camel 3.0, you can use fluent builders also for defining Camel endpoints. The following is equivalent with the example: import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer("foo").period(1000)) .log("Hello World"); } } Note Builder methods for all Camel components are available via camel-quarkus-core , but you still need to add the given component's extension as a dependency for the route to work properly. In case of the above example, it would be camel-quarkus-timer . | [
"import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?period=1000\") .log(\"Hello World\"); } }",
"import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer(\"foo\").period(1000)) .log(\"Hello World\"); } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/developing_applications_with_camel_extensions_for_quarkus/defining_camel_routes |
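The note above points out that the timer routes only run if the camel-quarkus-timer extension is on the classpath. One common way to add it, assuming a Maven project created with the Quarkus tooling (the short extension name may differ between Camel Extensions for Quarkus releases, so verify the coordinates for your version), is:
$ ./mvnw quarkus:add-extension -Dextensions="camel-quarkus-timer"
$ ./mvnw quarkus:dev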
Chapter 9. Technology Previews | Chapter 9. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 9. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 9.1. Installer and image creation NVMe over Fibre Channel devices are now available in RHEL installation program as a Technology Preview You can now add NVMe over Fibre Channel devices to your RHEL installation as a Technology Preview. In RHEL installation program, you can select these devices under the NVMe Fabrics Devices section while adding disks on the Installation Destination screen. Bugzilla:2107346 9.2. Security gnutls now uses kTLS as a Technology Preview The updated gnutls packages can use kernel TLS (kTLS) for accelerating data transfer on encrypted channels as a Technology Preview. To enable kTLS, add the tls.ko kernel module using the modprobe command, and create a new configuration file /etc/crypto-policies/local.d/gnutls-ktls.txt for the system-wide cryptographic policies with the following content: Note that the current version does not support updating traffic keys through TLS KeyUpdate messages, which impacts the security of AES-GCM ciphersuites. See the RFC 7841 - TLS 1.3 document for more information. Bugzilla:2108532 [1] 9.3. Shells and command-line tools GIMP available as a Technology Preview in RHEL 9 GNU Image Manipulation Program (GIMP) 2.99.8 is now available in RHEL 9 as a Technology Preview. The gimp package version 2.99.8 is a pre-release version with a set of improvements, but a limited set of features and no guarantee for stability. As soon as the official GIMP 3 is released, it will be introduced into RHEL 9 as an update of this pre-release version. In RHEL 9, you can install gimp easily as an RPM package. Bugzilla:2047161 [1] 9.4. Infrastructure services Socket API for TuneD available as a Technology Preview The socket API for controlling TuneD through a UNIX domain socket is now available as a Technology Preview. The socket API maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus is not available. By using the socket API, you can control the TuneD daemon to optimize the performance, and change the values of various tuning parameters. The socket API is disabled by default, you can enable it in the tuned-main.conf file. Bugzilla:2113900 9.5. Networking WireGuard VPN is available as a Technology Preview WireGuard, which Red Hat provides as an unsupported Technology Preview, is a high-performance VPN solution that runs in the Linux kernel. It uses modern cryptography and is easier to configure than other VPN solutions. Additionally, the small code-basis of WireGuard reduces the surface for attacks and, therefore, improves the security. For further details, see Setting up a WireGuard VPN . Bugzilla:1613522 [1] kTLS available as a Technology Preview RHEL provides kernel Transport Layer Security (KTLS) as a Technology Preview. kTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. kTLS also includes the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that provides this functionality. Bugzilla:1570255 [1] The systemd-resolved service is available as a Technology Preview The systemd-resolved service provides name resolution to local applications. 
The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that systemd-resolved is an unsupported Technology Preview. Bugzilla:2020529 The PRP and HSR protocols are now available as a Technology Preview This update adds the hsr kernel module that provides the following protocols: Parallel Redundancy Protocol (PRP) High-availability Seamless Redundancy (HSR) The IEC 62439-3 standard defines these protocols, and you can use this feature to configure zero-loss redundancy in Ethernet networks. Bugzilla:2177256 [1] Offloading IPsec encapsulation to a NIC is now available as a Technology Preview This update adds the IPsec packet offloading capabilities to the kernel. Previously, it was possible to only offload the encryption to a network interface controller (NIC). With this enhancement, the kernel can now offload the entire IPsec encapsulation process to a NIC to reduce the workload. Note that offloading the IPsec encapsulation process to a NIC also reduces the ability of the kernel to monitor and filter such packets. Bugzilla:2178699 [1] Network drivers for modems in RHEL are available as Technology Preview Device manufacturers support Federal Communications Commission (FCC) locking as the default setting. FCC provides a lock to bind WWAN drivers to a specific system where WWAN drivers provide a channel to communicate with modems. Based on the modem PCI ID, manufacturers integrate unlocking tools on Red Hat Enterprise Linux for ModemManager. However, a modem remains unusable if not unlocked previously even if the WWAN driver is compatible and functional. Red Hat Enterprise Linux provides the drivers for the following modems with limited functionality as a Technology Preview: Qualcomm MHI WWAM MBIM - Telit FN990Axx Intel IPC over Shared Memory (IOSM) - Intel XMM 7360 LTE Advanced Mediatek t7xx (WWAN) - Fibocom FM350GL Intel IPC over Shared Memory (IOSM) - Fibocom L860GL modem Jira:RHELDOCS-16760 [1] , Bugzilla:2123542, Jira:RHEL-6564, Bugzilla:2110561, Bugzilla:2222914 Segment Routing over IPv6 (SRv6) is available as a Technology Preview The RHEL kernel provides Segment Routing over IPv6 (SRv6) as a Technology Preview. You can use this functionality to optimize traffic flows in edge computing or to improve network programmability in data centers. However, the most significant use case is the end-to-end (E2E) network slicing in 5G deployment scenarios. In that area, the SRv6 protocol provides you with the programmable custom network slices and resource reservations to address network requirements for specific applications or services. At the same time, the solution can be deployed on a single-purpose appliance, and it satisfies the need for a smaller computational footprint. Bugzilla:2186375 [1] kTLS rebased to version 6.3 The kernel Transport Layer Security (KTLS) functionality is a Technology Preview. With this RHEL release, kTLS has been rebased to the 6.3 upstream version, and notable changes include: Added the support for 256-bit keys with TX device offload Delivered various bugfixes Bugzilla:2183538 [1] Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which maintains two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 9. 
Jira:RHELDOCS-19773 [1] 9.6. Kernel The kdump mechanism with a unified kernel image is available as a Technology Preview The kdump mechanism with a kernel image contained in a unified kernel image (UKI) is available as a Technology Preview. UKI is a single executable, combining the initramfs , vmlinuz ,and the kernel command line in a single file. The UKI key benefit being extending the cryptographic signature for SecureBoot to all components at once. For the feature to work, with the kernel command line contained in the UKI, set the crashkernel= parameter with an appropriate value. This reserves the required memory for kdump . Note: Currently the kexec_file_load system call from the Linux kernel cannot load UKI. Therefore, only the kernel image contained in the UKI is used when loading the crash kernel with the kexec_file_load system call. Bugzilla:2169720 [1] SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. Removing regular and TCS pages from an initialized enclave. Bugzilla:1874182 [1] The Intel data streaming accelerator driver for kernel is available as a Technology Preview The Intel data streaming accelerator driver (IDXD) for the kernel is currently available as a Technology Preview. It is an Intel CPU integrated accelerator and includes the shared work queue with process address space ID (pasid) submission and shared virtual memory (SVM). Bugzilla:2030412 The Soft-iWARP driver is available as a Technology Preview Soft-iWARP (siw) is a software, Internet Wide-area RDMA Protocol (iWARP), kernel driver for Linux. Soft-iWARP implements the iWARP protocol suite over the TCP/IP network stack. This protocol suite is fully implemented in software and does not require a specific Remote Direct Memory Access (RDMA) hardware. Soft-iWARP enables a system with a standard Ethernet adapter to connect to an iWARP adapter or to another system with already installed Soft-iWARP. Bugzilla:2023416 [1] SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. Removing regular and TCS pages from an initialized enclave. 
Bugzilla:1660337 [1] rvu_af , rvu_nicpf , and rvu_nicvf available as Technology Preview The following kernel modules are available as Technology Preview for Marvell OCTEON TX2 Infrastructure Processor family: rvu_nicpf - Marvell OcteonTX2 NIC Physical Function driver rvu_nicvf - Marvell OcteonTX2 NIC Virtual Function driver rvu_nicvf - Marvell OcteonTX2 RVU Admin Function driver Bugzilla:2040643 [1] 9.7. File systems and storage DAX is now available for ext4 and XFS as a Technology Preview In RHEL 9, the DAX file system is available as a Technology Preview. DAX provides means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a DAX compatible file system must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. Bugzilla:1995338 [1] NVMe-oF Discovery Service features available as a Technology Preview The NVMe-oF Discovery Service features, defined in the NVMexpress.org Technical Proposals (TP) 8013 and 8014, are available as a Technology Preview. To preview these features, use the nvme-cli 2.0 package and attach the host to an NVMe-oF target device that implements TP-8013 or TP-8014. For more information about TP-8013 and TP-8014, see the NVM Express 2.0 Ratified TPs from the https://nvmexpress.org/specifications/ website. Bugzilla:2021672 [1] nvme-stas package available as a Technology Preview The nvme-stas package, which is a Central Discovery Controller (CDC) client for Linux, is now available as a Technology Preview. It handles Asynchronous Event Notifications (AEN), Automated NVMe subsystem connection controls, Error handling and reporting, and Automatic ( zeroconf ) and Manual configuration. This package consists of two daemons, Storage Appliance Finder ( stafd ) and Storage Appliance Connector ( stacd ). Bugzilla:1893841 [1] NVMe TP 8006 in-band authentication available as a Technology Preview Implementing Non-Volatile Memory Express (NVMe) TP 8006, which is an in-band authentication for NVMe over Fabrics (NVMe-oF) is now available as an unsupported Technology Preview. The NVMe Technical Proposal 8006 defines the DH-HMAC-CHAP in-band authentication protocol for NVMe-oF, which is provided with this enhancement. For more information, see the dhchap-secret and dhchap-ctrl-secret option descriptions in the nvme-connect(1) man page. Bugzilla:2027304 [1] The io_uring interface is available as a Technology Preview io_uring is a new and effective asynchronous I/O interface, which is now available as a Technology Preview. By default, this feature is disabled. You can enable this interface by setting the kernel.io_uring_disabled sysctl variable to any one of the following values: 0 All processes can create io_uring instances as usual. 1 io_uring creation is disabled for unprivileged processes. The io_uring_setup fails with the -EPERM error unless the calling process is privileged by the CAP_SYS_ADMIN capability. Existing io_uring instances can still be used. 2 io_uring creation is disabled for all processes. The io_uring_setup always fails with -EPERM . Existing io_uring instances can still be used. This is the default setting. 
An updated version of the SELinux policy to enable the mmap system call on anonymous inodes is also required to use this feature. By using the io_uring command pass-through, an application can issue commands directly to the underlying hardware, such as nvme . Use of io_uring command pass-through currently requires a custom SELinux policy module. Create a custom SELinux policy module: Save the following lines as io_uring_cmd_passthrough.cil file: Load the policy module: Bugzilla:2068237 [1] 9.8. Compilers and development tools jmc-core and owasp-java-encoder available as a Technology Preview RHEL 9 is distributed with the jmc-core and owasp-java-encoder packages as Technology Preview features for the AMD and Intel 64-bit architectures. jmc-core is a library providing core APIs for Java Development Kit (JDK) Mission Control, including libraries for parsing and writing JDK Flight Recording files, and libraries for Java Virtual Machine (JVM) discovery through Java Discovery Protocol (JDP). The owasp-java-encoder package provides a collection of high-performance low-overhead contextual encoders for Java. Note that since RHEL 9.2, jmc-core and owasp-java-encoder are available in the CodeReady Linux Builder (CRB) repository, which you must explicitly enable. See How to enable and make use of content within CodeReady Linux Builder for more information. Bugzilla:1980981 9.9. Identity Management DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. Bugzilla:2084180 Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers can use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . Bugzilla:2084166 sssd-idp sub-package available as a Technology Preview The sssd-idp sub-package for SSSD contains the oidc_child and krb5 idp plugins, which are client-side components that perform OAuth2 authentication against Identity Management (IdM) servers. This feature is available only with IdM servers on RHEL 9.1 and later. 
Bugzilla:2065693 SSSD internal krb5 idp plugin available as a Technology Preview The SSSD krb5 idp plugin allows you to authenticate against an external identity provider (IdP) using the OAuth2 protocol. This feature is available only with IdM servers on RHEL 9.1 and later. Bugzilla:2056482 RHEL IdM allows delegating user authentication to external identity providers as a Technology Preview In RHEL IdM, you can now associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 9.1 or later, they receive RHEL IdM single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command For additional information, see Using external identity providers to authenticate to IdM . Bugzilla:2069202 ACME supports automatically removing expired certificates as a Technology Preview The Automated Certificate Management Environment (ACME) service in Identity Management (IdM) adds an automatic mechanism to purge expired certificates from the certificate authority (CA) as a Technology Preview. As a result, ACME can now automatically remove expired certificates at specified intervals. Removing expired certificates is disabled by default. To enable it, enter: With this enhancement, ACME can now automatically remove expired certificates at specified intervals. Removing expired certificates is disabled by default. To enable it, enter: This removes expired certificates on the first day of every month at midnight. Note Expired certificates are removed after their retention period. By default, this is 30 days after expiry. For more details, see the ipa-acme-manage(1) man page. Jira:RHELPLAN-145900 9.10. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is available for the 64-bit ARM architecture as a Technology Preview. You can now connect to the desktop session on a 64-bit ARM server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on 64-bit ARM. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27394 [1] GNOME for the IBM Z architecture available as a Technology Preview The GNOME desktop environment is available for the IBM Z architecture as a Technology Preview. You can now connect to the desktop session on an IBM Z server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on IBM Z. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27737 [1] 9.11. 
Virtualization Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, and IBM Z hosts with RHEL 9. With this feature, a RHEL 7, RHEL 8, or RHEL 9 VM that runs on a physical RHEL 9 host can act as a hypervisor, and host its own VMs. Jira:RHELDOCS-17040 [1] AMD SEV and SEV-ES for KVM virtual machines As a Technology Preview, RHEL 9 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 9 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation. Jira:RHELPLAN-65217 [1] Virtualization is now available on ARM 64 As a Technology Preview, it is now possible to create KVM virtual machines on systems using ARM 64 CPUs. Jira:RHELPLAN-103993 [1] virtio-mem is now available on AMD64, Intel 64, and ARM 64 As a Technology Preview, RHEL 9 introduces the virtio-mem feature on AMD64, Intel 64, and ARM 64 systems. Using virtio-mem makes it possible to dynamically add or remove host memory in virtual machines (VMs). To use virtio-mem , define virtio-mem memory devices in the XML configuration of a VM and use the virsh update-memory-device command to request memory device size changes while the VM is running. To see the current memory size exposed by such memory devices to a running VM, view the XML configuration of the VM. Note, however, that virtio-mem currently does not work on VMs that use a Windows operating system. Bugzilla:2014487 , Bugzilla:2044162 , Bugzilla:2044172 Intel TDX in RHEL guests As a Technology Preview, the Intel Trust Domain Extension (TDX) feature can now be used in RHEL 9.2 and later guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). Note, however, that TDX currently does not work with kdump , and enabling TDX will cause kdump to fail on the VM. Bugzilla:1955275 [1] A unified kernel image of RHEL is now available as a Technology Preview As a Technology Preview, you can now obtain the RHEL kernel as a unified kernel image (UKI) for virtual machines (VMs). A unified kernel image combines the kernel, initramfs, and kernel command line into a single signed binary file. UKIs can be used in virtualized and cloud environments, especially in confidential VMs where strong SecureBoot capabilities are required. The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Bugzilla:2142102 [1] Intel vGPU available as a Technology Preview As a Technology Preview, it is possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. 
Note that this feature is deprecated and was removed entirely with the RHEL 9.3 release. Jira:RHELDOCS-17050 [1] 9.12. RHEL in cloud environments RHEL is now available on Azure confidential VMs as a Technology Preview With the updated RHEL kernel, you can now create and run RHEL confidential virtual machines (VMs) on Microsoft Azure as a Technology Preview. The newly added unified kernel image (UKI) now enables booting encrypted confidential VM images on Azure. The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Jira:RHELPLAN-139800 [1] 9.13. Containers SQLite database backend for Podman is available as a Technology Preview Beginning with Podman v4.6, the SQLite database backend for Podman is available as a Technology Preview. To set the database backend to SQLite, add the database_backend = "sqlite" option in the /etc/containers/containers.conf configuration file. Run the podman system reset command to reset storage back to the initial state before you switch to the SQLite database backend. Note that you have to re-create all containers and pods. The SQLite database guarantees good stability and consistency. Other databases in the containers stack will be moved to SQLite as well. The BoltDB remains the default database backend. Jira:RHELPLAN-154429 [1] The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. Jira:RHELDOCS-16861 [1] | [
"[global] ktls = true",
"---cut here--- ( allow unconfined_domain_type device_node ( io_uring ( cmd ))) ( allow unconfined_domain_type file_type ( io_uring ( cmd ))) ---cut here---",
"semodule -i io_uring_cmd_passthrough.cil",
"ipa-acme-manage pruning --enable --cron \"0 0 1 * *\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.3_release_notes/technology-previews |
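For the io_uring Technology Preview described above, the kernel.io_uring_disabled variable can be inspected and persisted with standard sysctl tooling. This is only a sketch of one way to apply the values listed in the release note; the file name under /etc/sysctl.d/ is arbitrary, and 0 is the value that allows all processes to create io_uring instances.
# sysctl kernel.io_uring_disabled
# echo "kernel.io_uring_disabled = 0" > /etc/sysctl.d/99-io-uring.conf
# sysctl -p /etc/sysctl.d/99-io-uring.conf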
Chapter 73. token | Chapter 73. token This chapter describes the commands under the token command. 73.1. token issue Issue new token Usage: Table 73.1. Command arguments Value Summary -h, --help Show this help message and exit Table 73.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 73.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 73.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.2. token revoke Revoke existing token Usage: Table 73.6. Positional arguments Value Summary <token> Token to be deleted Table 73.7. Command arguments Value Summary -h, --help Show this help message and exit | [
"openstack token issue [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]",
"openstack token revoke [-h] <token>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/token |
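Assuming your OpenStack credentials are already sourced in the environment, the two subcommands are typically combined as shown below; the -f value and -c id output options come from the formatter tables above, and the variable name is arbitrary.
$ TOKEN=$(openstack token issue -f value -c id)
$ openstack token revoke "$TOKEN"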
15.3. Managing Users and Groups for a CA, OCSP, KRA, or TKS | 15.3. Managing Users and Groups for a CA, OCSP, KRA, or TKS Many of the operations that users can perform are dictated by the groups that they belong to; for instance, agents for the CA manage certificates and profiles, while administrators manage CA server configuration. Four subsystems - the CA, OCSP, KRA, and TKS - use the Java administrative console to manage groups and users. The TPS has web-based admin services, and users and groups are configured through its web service page. 15.3.1. Managing Groups Note pkiconsole is being deprecated. 15.3.1.1. Creating a New Group Log into the administrative console. Select Users and Groups from the navigation menu on the left. Select the Groups tab. Click Edit , and fill in the group information. It is only possible to add users who already exist in the internal database. Edit the ACLs to grant the group privileges. See Section 15.5.4, "Editing ACLs" for more information. If no ACIs are added to the ACLs for the group, the group will have no access permissions to any part of Certificate System. 15.3.1.2. Changing Members in a Group Members can be added or deleted from all groups. The group for administrators must have at least one user entry. Log into the administrative console. Select Users and Groups from the navigation tree on the left. Click the Groups tab. Select the group from the list of names, and click Edit . Make the appropriate changes. To change the group description, type a new description in the Group description field. To remove a user from the group, select the user, and click Delete . To add users, click Add User . Select the users to add from the dialog box, and click OK . 15.3.2. Managing Users (Administrators, Agents, and Auditors) The users for each subsystem are maintained separately. Just because a person is an administrator in one subsystem does not mean that person has any rights (or even a user entry) for another subsystem. Users can be configured and, with their user certificates, trusted as agents, administrators, or auditors for a subsystem. 15.3.2.1. Creating Users After you installed Certificate System, only the user created during the setup exists. This section describes how to create additional users. Note For security reasons, create individual accounts for Certificate System users. 15.3.2.1.1. Creating Users Using the Command Line To create a user using the command line: Add a user account. For example, to add the example user to the CA: This command uses the caadmin user to add a new account. Optionally, add a user to a group. For example, to add the example user to the Certificate Manager Agents group: Create a certificate request: If a Key Recovery Authority (KRA) exists in your Certificate System environment: This command stores the Certificate Signing Request (CSR) in the CRMF format in the ~/user_name.req file. If no Key Recovery Authority (KRA) exists in your Certificate System environment: Create a NSS database directory: Store the CSR in a PKCS-#10 formatted file specified by the -o option, -d for the path to an initialized NSS database directory, -P option for a password file, -p for a password, and -n for a subject DN: Create an enrollment request: Create the ~/cmc.role_crmf.cfg file with the following content: Set the parameters based on your environment and the CSR format used in the step. 
Pass the previously created configuration file to the CMCRequest utility to create the CMC request: Submit a Certificate Management over CMS (CMC) request: Create the ~/HttpClient_role_crmf.cfg file with the following content: Set the parameters based on your environment. Submit the request to the CA: Verify the result: Optionally, to import the certificate as the user to its own ~/.dogtag/pki-instance_name/ database: Add the certificate to the user record: List certificates issued for the user to discover the certificate's serial number. For example, to list certificates that contain the example user name in the certificate's subject: The serial number of the certificate is required in the step. Add the certificate using its serial number from the certificate repository to the user account in the Certificate System database. For example, for a CA user: 15.3.2.1.2. Creating Users Using the Console Note pkiconsole is being deprecated. To create a user using the PKI Console: Log into the administrative console. In the Configuration tab, select Users and Groups . Click Add . Fill in the information in the Edit User Information dialog. Most of the information is standard user information, such as the user's name, email address, and password. This window also contains a field called User State , which can contain any string, which is used to add additional information about the user; most basically, this field can show whether this is an active user. Select the group to which the user will belong. The user's group membership determines what privileges the user has. Assign agents, administrators, and auditors to the appropriate subsystem group. Store the user's certificate. Request a user certificate through the CA end-entities service page. If auto-enrollment is not configured for the user profile, then approve the certificate request. Retrieve the certificate using the URL provided in the notification email, and copy the base-64 encoded certificate to a local file or to the clipboard. Select the new user entry, and click Certificates . Click Import , and paste in the base-64 encoded certificate. 15.3.2.2. Changing a Certificate System User's Certificate Log into the administrative console. Select Users and Groups . Select the user to edit from the list of user IDs, and click Certificates . Click Import to add the new certificate. In the Import Certificate window, paste the new certificate in the text area. Include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines. 15.3.2.3. Renewing Administrator, Agent, and Auditor User Certificates There are two methods of renewing a certificate. Regenerating the certificate takes its original key and its original profile and request, and recreates an identical key with a new validity period and expiration date. Re-keying a certificate resubmits the initial certificate request to the original profile, but generates a new key pair. Administrator certificates can be renewed by being re-keyed. Each subsystem has a bootstrap user that was created at the time the subsystem was created. A new certificate can be requested for this user before their original one expires, using one of the default renewal profiles. Certificates for administrative users can be renewed directly in the end user enrollment forms, using the serial number of the original certificate. Renew the admin user certificates in the CA's end users forms, as described in Section 5.4.1.1.2, "Certificate-Based Renewal" . 
This must be the same CA that first issued the certificate (or a clone of it). Agent certificates can be renewed by using the certificate-based renewal form in the end entities page. Self-renew user SSL client certificate . This form recognizes and updates the certificate stored in the browser's certificate store directly. Note It is also possible to renew the certificate using certutil , as described in Section 17.3.3, "Renewing Certificates Using certutil" . Rather than using the certificate stored in a browser to initiate renewal, certutil uses an input file with the original key. Add the renewed user certificate to the user entry in the internal LDAP database. Open the console for the subsystem. Configuration | Users and Groups | Users | admin | Certificates | Import In the Configuration tab, select Users and Groups . In the Users tab, double-click the user entry with the renewed certificate, and click Certificates . Click Import , and paste in the base-64 encoded certificate. Note pkiconsole is being deprecated. This can also be done by using ldapmodify to add the renewed certificate directly to the user entry in the internal LDAP database, by replacing the userCertificate attribute in the user entry, such as uid=admin,ou=people,dc=subsystem-base-DN . 15.3.2.4. Renewing an Expired Administrator, Agent, and Auditor User Certificate When a valid user certificate has already expired, you can no longer use the web service page or the pki command-line tool requiring authentication. In such a scenario, you can use the pki-server cert-fix command to renew an expired certificate. Before you proceed, make sure: You have a valid CA certificate. You have root privileges. Procedure 15.1. Renewing an Expired Administrator, Agent, and Auditor User Certificate Disable self test. Either run the following command: Or remove the following line from CA's CS.cfg file and restart the CA subsystem: Check the expired certificates in the client's NSS database and find the certificate's serial number (certificate ID). List the user certificates: Get the serial number of the expired certificate that you want to renew: Renew the certificate. The local LDAP server requires the LDAP Directory Manager's password. Re-enable self test. Either run the following command: Or add the following line to CA's CS.cfg file and restart the CA subsystem: To verify that you have succeeded in the certificate renewal, you can display sufficient information about the certificate by running: To see full details of the specific certificate including attributes, extensions, public key modulus, hashes, and more, you can also run: 15.3.2.5. Deleting a Certificate System User Users can be deleted from the internal database. Deleting a user from the internal database deletes that user from all groups to which the user belongs. To remove the user from specific groups, modify the group membership. Delete a privileged user from the internal database by doing the following: Log into the administrative console. Select Users and Groups from the navigation menu on the left. Select the user from the list of user IDs, and click Delete . Confirm the delete when prompted. | [
"pkiconsole https://server.example.com:8443/ subsystem_type",
"pki -d ~/.dogtag/pki-instance_name/ca/alias/ -c password -n caadmin ca -user-add example --fullName \" Example User \" --------------------- Added user \"example\" --------------------- User ID: example Full name: Example User",
"pki -d ~/.dogtag/pki-instance_name/ -p password -n \" caadmin \" user-add-membership example Certificate Manager Agents",
"CRMFPopClient -d ~/.dogtag/pki-instance_name/ -p password -n \" user_name \" -q POP_SUCCESS -b kra.transport -w \"AES/CBC/PKCS5Padding\" -v -o ~/user_name.req",
"export pkiinstance=ca1 # echo USD{pkiinstance} # export agentdir=~/.dogtag/USD{pkiinstance}/agent1.dir # echo USD{agentdir} # pki -d USD{agentdir}/ -C USD{ somepwdfile } client-init",
"PKCS10Client -d USD{agentdir}/ -P USD{ somepwdfile } -n \"cn=agent1,uid=agent1\" -o USD{agentdir}/agent1.csr PKCS10Client: Certificate request written into /.dogtag/ca1/agent1.dir/agent1.csr PKCS10Client: PKCS#10 request key id written into /.dogtag/ca1/agent1.dir/agent1.csr.keyId",
"#numRequests: Total number of PKCS10 requests or CRMF requests. numRequests=1 #input: full path for the PKCS10 request or CRMF request, #the content must be in Base-64 encoded format #Multiple files are supported. They must be separated by space. input= ~/user_name.req #output: full path for the CMC request in binary format output= ~/cmc.role_crmf.req #tokenname: name of token where agent signing cert can be found (default is internal) tokenname=internal #nickname: nickname for agent certificate which will be used #to sign the CMC full request. nickname= PKI Administrator for Example.com #dbdir: directory for cert9.db, key4.db and pkcs11.txt dbdir= ~/.dogtag/pki-instance_name/ #password: password for cert9.db which stores the agent #certificate password= password #format: request format, either pkcs10 or crmf format= crmf",
"CMCRequest ~/cmc.role_crmf.cfg",
"#host: host name for the http server host= server.example.com #port: port number port= 8443 #secure: true for secure connection, false for nonsecure connection secure=true #input: full path for the enrollment request, the content must be in binary format input= ~/cmc.role_crmf.req #output: full path for the response in binary format output= ~/cmc.role_crmf.resp #tokenname: name of token where SSL client authentication cert can be found (default is internal) #This parameter will be ignored if secure=false tokenname=internal #dbdir: directory for cert9.db, key4.db and pkcs11.txt #This parameter will be ignored if secure=false dbdir= ~/.dogtag/pki-instance_name/ #clientmode: true for client authentication, false for no client authentication #This parameter will be ignored if secure=false clientmode=true #password: password for cert9.db #This parameter will be ignored if secure=false and clientauth=false password= password #nickname: nickname for client certificate #This parameter will be ignored if clientmode=false nickname= PKI Administrator for Example.com #servlet: servlet name servlet=/ca/ee/ca/profileSubmitCMCFull",
"HttpClient ~/HttpClient_role_crmf.cfg Total number of bytes read = 3776 after SSLSocket created, thread token is Internal Key Storage Token client cert is not null handshake happened writing to socket Total number of bytes read = 2523 MIIJ1wYJKoZIhvcNAQcCoIIJyDCCCcQCAQMxDzANBglghkgBZQMEAgEFADAxBggr The response in data format is stored in ~/cmc.role_crmf.resp",
"CMCResponse ~/cmc.role_crmf.resp Certificates: Certificate: Data: Version: v3 Serial Number: 0xE Signature Algorithm: SHA256withRSA - 1.2.840.113549.1.1.11 Issuer: CN=CA Signing Certificate,OU=pki- instance_name Security Domain Validity: Not Before: Friday, July 21, 2017 12:06:50 PM PDT America/Los_Angeles Not After: Wednesday, January 17, 2018 12:06:50 PM PST America/Los_Angeles Subject: CN= user_name Number of controls is 1 Control #0: CMCStatusInfoV2 OID: {1 3 6 1 5 5 7 7 25} BodyList: 1 Status: SUCCESS",
"certutil -d ~/.dogtag/pki-instance_name/ -A -t \"u,u,u\" -n \" user_name certificate \" -i ~/cmc.role_crmf.resp",
"pki -d ~/.dogtag/pki-instance_name/ -c password -n caadmin ca-user-cert-find example ----------------- 1 entries matched ----------------- Cert ID: 2;6;CN=CA Signing Certificate,O=EXAMPLE;CN=PKI Administrator,E= example @example.com,O=EXAMPLE Version: 2 Serial Number: 0x6 Issuer: CN=CA Signing Certificate,O=EXAMPLE Subject: CN=PKI Administrator,E= example @example.com,O=EXAMPLE ---------------------------- Number of entries returned 1",
"pki -c password -n caadmin ca -user-cert-add example --serial 0x6",
"pkiconsole https://server.example.com:8443/ subsystem_type",
"pkiconsole https://server.example.com: admin_port/subsystem_type",
"pki-server selftest-disable -i PKI_instance",
"selftests.container.order.startup=CAPresence:critical, SystemCertsVerification:critical",
"certutil -L -d /root/nssdb/",
"certutil -L -d /root/nssdb/ -n Expired_cert | grep Serial Serial Number: 16 (0x10)",
"pki-server cert-fix --ldap-url ldap:// host 389 --agent-uid caadmin -i PKI_instance -p PKI_https_port --extra-cert 16",
"pki-server selftest-enable -i PKI_instance",
"selftests.container.order.startup=CAPresence:critical, SystemCertsVerification:critical",
"pki ca-cert-find",
"pki ca-cert-show 16 --pretty"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/creating_a_new_group |
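Section 15.3.2.3 above mentions that the renewed certificate can also be written straight into the user's LDAP entry with ldapmodify by replacing the userCertificate attribute. A minimal sketch of such an update follows; the bind DN, host, port, base DN, and certificate file path are assumptions for illustration and must match your deployment.
$ ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com:389 <<EOF
dn: uid=admin,ou=people,dc=example,dc=com
changetype: modify
replace: userCertificate
userCertificate:< file:///tmp/admin-renewed-cert.der
EOF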
function::regparm | function::regparm Name function::regparm - Specify regparm value used to compile function Synopsis Arguments n original regparm value Description Call this function with argument n before accessing function arguments using the *_arg function is the function was build with the gcc -mregparm=n option. (The i386 kernel is built with \-mregparm=3, so systemtap considers regparm(3) the default for kernel functions on that architecture.) Only valid on i386 and x86_64 (when probing 32bit applications). Produces an error on other architectures. | [
"regparm(n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-regparm |
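As a hypothetical usage sketch (the binary path, function name, and argument count below are made up for illustration and are not part of the reference entry): when probing a 32-bit application whose functions were compiled with gcc -mregparm=2, you would call regparm(2) before reading arguments with the *_arg helpers, for example:
# stap -e 'probe process("/usr/bin/legacy-app32").function("compute") { regparm(2); printf("arg1=%d arg2=%d\n", int_arg(1), int_arg(2)) }'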
Chapter 2. Automating Network Intrusion Detection and Prevention Systems (IDPS) with Ansible | Chapter 2. Automating Network Intrusion Detection and Prevention Systems (IDPS) with Ansible You can use Ansible to automate your Intrusion Detection and Prevention System (IDPS). For the purpose of this guide, we use Snort as the IDPS. Use Ansible automation hub to consume content collections, such as tasks, roles, and modules to create automated workflows. 2.1. Requirements and prerequisites Before you begin automating your IDPS with Ansible, ensure that you have the proper installations and configurations necessary to successfully manage your IDPS. You have installed Ansible 2.9 or later. SSH connection and keys are configured. IDPS software (Snort) is installed and configured. You have access to the IDPS server (Snort) to enforce new policies. 2.1.1. Verifying your IDPS installation To verify that Snort has been configured successfully, call it via sudo and ask for the version: $ sudo snort --version ,,_ -*> Snort! <*- o" )~ Version 2.9.13 GRE (Build 15013) "" By Martin Roesch & The Snort Team: http://www.snort.org/contact#team Copyright (C) 2014-2019 Cisco and/or its affiliates. | [
"sudo snort --version ,,_ -*> Snort! <*- o\" )~ Version 2.9.13 GRE (Build 15013) \"\" By Martin Roesch & The Snort Team: http://www.snort.org/contact#team Copyright (C) 2014-2019 Cisco and/or its affiliates. All rights reserved. Copyright (C) 1998-2013 Sourcefire, Inc., et al. Using libpcap version 1.5.3 Using PCRE version: 8.32 2012-11-30 Using ZLIB version: 1.2.7",
"sudo systemctl status snort ● snort.service - Snort service Loaded: loaded (/etc/systemd/system/snort.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2019-08-26 17:06:10 UTC; 1s ago Main PID: 17217 (snort) CGroup: /system.slice/snort.service └─17217 /usr/sbin/snort -u root -g root -c /etc/snort/snort.conf -i eth0 -p -R 1 --pid-path=/var/run/snort --no-interface-pidfile --nolock-pidfile [...]",
"ansible-galaxy install ansible_security.ids_rule",
"- name: Add Snort rule hosts: snort",
"- name: Add Snort rule hosts: snort become: true",
"- name: Add Snort rule hosts: snort become: true vars: ids_provider: snort",
"- name: Add Snort rule hosts: snort become: true vars: ids_provider: snort tasks: - name: Add snort password attack rule include_role: name: \"ansible_security.ids_rule\" vars: ids_rule: 'alert tcp any any -> any any (msg:\"Attempted /etc/passwd Attack\"; uricontent:\"/etc/passwd\"; classtype:attempted-user; sid:99000004; priority:1; rev:1;)' ids_rules_file: '/etc/snort/rules/local.rules' ids_rule_state: present",
"ansible-navigator run add_snort_rule.ym --mode stdout"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_security_automation_guide/assembly-idps_ansible-security |
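The play above targets a host group named snort, so your Ansible inventory needs such a group. The host name and connection user below are placeholders; a minimal inventory plus a quick connectivity check might look like this:
$ cat > inventory.yml <<EOF
all:
  children:
    snort:
      hosts:
        snort-sensor.example.com:
          ansible_user: ec2-user
EOF
$ ansible -i inventory.yml snort -m ping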
Chapter 2. Bug fixes | Chapter 2. Bug fixes This section describes bugs fixed in OpenShift sandboxed containers 1.8. Pod VM image is now deleted when deleting KataConfig on Azure Previously, when you deleted the KataConfig custom resource, the pod VM image might not have been deleted. A workaround required using the Azure CLI to check the pod VM gallery for the image and delete it manually, if necessary. This issue is now resolved in the 1.8.1 release of OpenShift sandboxed containers. Jira:KATA-3462 | null | https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/release_notes/bug-fixes |
18.3. Using IPTables | 18.3. Using IPTables The first step in using iptables is to start the iptables service. Use the following command to start the iptables service: Note The ip6tables service can be turned off if you intend to use the iptables service only. If you deactivate the ip6tables service, remember to deactivate the IPv6 network also. Never leave a network device active without the matching firewall. To force iptables to start by default when the system is booted, use the following command: This forces iptables to start whenever the system is booted into runlevel 3, 4, or 5. 18.3.1. IPTables Command Syntax The following sample iptables command illustrates the basic command syntax: The -A option specifies that the rule be appended to <chain> . Each chain is comprised of one or more rules , and is therefore also known as a ruleset . The three built-in chains are INPUT, OUTPUT, and FORWARD. These chains are permanent and cannot be deleted. The chain specifies the point at which a packet is manipulated. The -j <target> option specifies the target of the rule; i.e., what to do if the packet matches the rule. Examples of built-in targets are ACCEPT, DROP, and REJECT. Refer to the iptables man page for more information on the available chains, options, and targets. | [
"service iptables start",
"chkconfig --level 345 iptables on",
"iptables -A <chain> -j <target>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-fireall-ipt-act |
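To make the -A <chain> -j <target> syntax above concrete, here is one hedged example that appends a rule accepting inbound SSH, lists the resulting INPUT chain, and saves the ruleset; the port choice is only an illustration and may not suit your policy.
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -L INPUT -n --line-numbers
# service iptables save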
Chapter 3. Configuring OpenStack to use Ceph block devices | Chapter 3. Configuring OpenStack to use Ceph block devices As a storage administrator, you must configure the Red Hat OpenStack Platform to use the Ceph block devices. The Red Hat OpenStack Platform can use Ceph block devices for Cinder, Cinder Backup, Glance, and Nova. Prerequisites A new or existing Red Hat Ceph Storage cluster. A running Red Hat OpenStack Platform environment. 3.1. Configuring Cinder to use Ceph block devices The Red Hat OpenStack Platform can use Ceph block devices to provide back-end storage for Cinder volumes. Prerequisites Root-level access to the Cinder node. A Ceph volume pool. The user and UUID of the secret to interact with Ceph block devices. Procedure Edit the Cinder configuration file: In the [DEFAULT] section, enable Ceph as a backend for Cinder: Ensure that the Glance API version is set to 2. If you are configuring multiple cinder back ends in enabled_backends , the glance_api_version = 2 setting must be in the [DEFAULT] section and not the [ceph] section. Create a [ceph] section in the cinder.conf file. Add the Ceph settings in the following steps under the [ceph] section. Specify the volume_driver setting and set it to use the Ceph block device driver: Specify the cluster name and Ceph configuration file location. In typical deployments the Ceph cluster has a cluster name of ceph and a Ceph configuration file at /etc/ceph/ceph.conf . If the Ceph cluster name is not ceph , specify the cluster name and configuration file path appropriately: By default, Red Hat OpenStack Platform stores Ceph volumes in the rbd pool. To use the volumes pool created earlier, specify the rbd_pool setting and set the volumes pool: Red Hat OpenStack Platform does not have a default user name or a UUID of the secret for volumes. Specify rbd_user and set it to the cinder user. Then, specify the rbd_secret_uuid setting and set it to the generated UUID stored in the uuid-secret.txt file: Specify the following settings: When you configure Cinder to use Ceph block devices, the configuration file might look similar to this: Example Note Consider removing the default [lvm] section and its settings. 3.2. Configuring Cinder backup to use Ceph block devices The Red Hat OpenStack Platform can configure Cinder backup to use Ceph block devices. Prerequisites Root-level access to the Cinder node. Procedure Edit the Cinder configuration file: Go to the [ceph] section of the configuration file. Specify the backup_driver setting and set it to the Ceph driver: Specify the backup_ceph_conf setting and specify the path to the Ceph configuration file: Note The Cinder backup Ceph configuration file may be different from the Ceph configuration file used for Cinder. For example, it can point to a different Ceph storage cluster. Specify the Ceph pool for backups: Note The Ceph configuration file used for Cinder backup might be different from the Ceph configuration file used for Cinder. Specify the backup_ceph_user setting and specify the user as cinder-backup : Specify the following settings: When you include the Cinder options, the [ceph] section of the cinder.conf file might look similar to this: Example Verify if Cinder backup is enabled: If enable_backup is set to False , then edit the local_settings file and set it to True . Example 3.3. Configuring Glance to use Ceph block devices The Red Hat OpenStack Platform can configure Glance to use Ceph block devices. Prerequisites Root-level access to the Glance node. 
Procedure To use Ceph block devices by default, edit the /etc/glance/glance-api.conf file. If you used different pool, user or Ceph configuration file settings apply the appropriate values. Uncomment the following settings if necessary and change their values accordingly: To enable copy-on-write (CoW) cloning set show_image_direct_url to True . Important Enabling CoW exposes the back end location via Glance's API, so the endpoint should not be publicly accessible. Disable cache management if necessary. The flavor should be set to keystone only, not keystone+cachemanagement . Red Hat recommends the following properties for images: The virtio-scsi controller gets better performance and provides support for discard operations. For systems using SCSI/SAS drives, connect every Cinder block device to that controller. Also, enable the QEMU guest agent and send fs-freeze/thaw calls through the QEMU guest agent. 3.4. Configuring Nova to use Ceph block devices The Red Hat OpenStack Platform can configure Nova to use Ceph block devices. You must configure each Nova node to use ephemeral back-end storage devices, which allows all virtual machines to use the Ceph block devices. Prerequisites Root-level access to the Nova nodes. Procedure Edit the Ceph configuration file: Add the following section to the [client] section of the Ceph configuration file: Create new directories for the admin socket and log file, and change the directory permissions to use the qemu user and libvirtd group: Note The directories must be allowed by SELinux or AppArmor. On each Nova node, edit the /etc/nova/nova.conf file. Under the [libvirt] section, configure the following settings: Example Replace the UUID in rbd_user_secret with the UUID in the uuid-secret.txt file. 3.5. Restarting the OpenStack services Restarting the Red Hat OpenStack Platform services enables you to activate the Ceph block device drivers. Prerequisites Root-level access to the Red Hat OpenStack Platform nodes. Procedure Load the block device pool names and Ceph user names into the configuration file. Restart the appropriate OpenStack services after modifying the corresponding configuration files: | [
"vim /etc/cinder/cinder.conf",
"enabled_backends = ceph",
"glance_api_version = 2",
"volume_driver = cinder.volume.drivers.rbd.RBDDriver",
"rbd_cluster_name = us-west rbd_ceph_conf = /etc/ceph/us-west.conf",
"rbd_pool = volumes",
"rbd_user = cinder rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964",
"rbd_flatten_volume_from_snapshot = false rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1",
"[DEFAULT] enabled_backends = ceph glance_api_version = 2 ... [ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_cluster_name = ceph rbd_pool = volumes rbd_user = cinder rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964 rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1",
"vim /etc/cinder/cinder.conf",
"backup_driver = cinder.backup.drivers.ceph",
"backup_ceph_conf = /etc/ceph/ceph.conf",
"backup_ceph_pool = backups",
"backup_ceph_user = cinder-backup",
"backup_ceph_chunk_size = 134217728 backup_ceph_stripe_unit = 0 backup_ceph_stripe_count = 0 restore_discard_excess_bytes = true",
"[ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver rbd_cluster_name = ceph rbd_pool = volumes rbd_user = cinder rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964 rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1 backup_driver = cinder.backup.drivers.ceph backup_ceph_user = cinder-backup backup_ceph_conf = /etc/ceph/ceph.conf backup_ceph_chunk_size = 134217728 backup_ceph_pool = backups backup_ceph_stripe_unit = 0 backup_ceph_stripe_count = 0 restore_discard_excess_bytes = true",
"cat /etc/openstack-dashboard/local_settings | grep enable_backup",
"OPENSTACK_CINDER_FEATURES = { 'enable_backup': True, }",
"vim /etc/glance/glance-api.conf",
"stores = rbd default_store = rbd rbd_store_chunk_size = 8 rbd_store_pool = images rbd_store_user = glance rbd_store_ceph_conf = /etc/ceph/ceph.conf",
"show_image_direct_url = True",
"flavor = keystone",
"hw_scsi_model=virtio-scsi hw_disk_bus=scsi hw_qemu_guest_agent=yes os_require_quiesce=yes",
"vim /etc/ceph/ceph.conf",
"[client] rbd cache = true rbd cache writethrough until flush = true rbd concurrent management ops = 20 admin socket = /var/run/ceph/guests/USDcluster-USDtype.USDid.USDpid.USDcctid.asok log file = /var/log/ceph/qemu-guest-USDpid.log",
"mkdir -p /var/run/ceph/guests/ /var/log/ceph/ chown qemu:libvirt /var/run/ceph/guests /var/log/ceph/",
"[libvirt] images_type = rbd images_rbd_pool = vms images_rbd_ceph_conf = /etc/ceph/ceph.conf rbd_user = cinder rbd_secret_uuid = 4b5fd580-360c-4f8c-abb5-c83bb9a3f964 disk_cachemodes=\"network=writeback\" inject_password = false inject_key = false inject_partition = -2 live_migration_flag=\"VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED\" hw_disk_discard = unmap",
"systemctl restart openstack-cinder-volume systemctl restart openstack-cinder-backup systemctl restart openstack-glance-api systemctl restart openstack-nova-compute"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/block_device_to_openstack_guide/configuring-openstack-to-use-ceph-block-devices |
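The Cinder and Nova settings in the chapter above reference a libvirt secret UUID stored in a uuid-secret.txt file. The following is a minimal, illustrative sketch of how such a secret might be defined on a compute node; the secret name, file locations, and the client.cinder Ceph key are assumptions drawn from the chapter's conventions rather than steps it prescribes, so adapt them to your deployment.

# Generate a UUID and keep it for the rbd_secret_uuid settings in cinder.conf and nova.conf
uuidgen | tee uuid-secret.txt
# Describe a Ceph-type libvirt secret that uses that UUID
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$(cat uuid-secret.txt)</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# Register the secret with libvirt and load the client.cinder key into it
virsh secret-define --file secret.xml
virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(ceph auth get-key client.cinder)

The UUID written to uuid-secret.txt is the value expected by the rbd_secret_uuid settings shown in the cinder.conf and nova.conf examples above.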
Chapter 54. Starting dynamic tasks and processes | Chapter 54. Starting dynamic tasks and processes You can add dynamic tasks and processes to a case during run time. Dynamic actions are a way to address changing situations, where an unanticipated change during the case requires a new task or process to be incorporated into the case. Use a case application to add a dynamic task during run time. For demonstration purposes, the Business Central distribution includes a Showcase application where you can start a new dynamic task or process for the IT Orders application. Prerequisites KIE Server is deployed and connected to Business Central. The IT Orders project is deployed to KIE Server. The Showcase application .war file has been deployed alongside Business Central. Procedure With the IT_Orders_New project deployed and running in KIE Server, in a web browser, navigate to the Showcase login page http://localhost:8080/rhpam-case-mgmt-showcase/ . Alternatively, if you have configured Business Central to display the Apps launcher button, use it to open a new browser window with the Showcase login page. Log in to the Showcase application using your Business Central login credentials. Select an active case instance from the list to open it. Under Overview → Actions → Available , click the New user task or New process task button to add a new task or process task. Figure 54.1. Showcase dynamic actions To create a dynamic user task, start a New user task and complete the required information: To create a dynamic process task, start a New process task and complete the required information: To view a dynamic user task in Business Central, click Menu → Track → Task Inbox . The user task that was added dynamically using the Showcase application appears in the Task Inbox of users assigned to the task during task creation. Click the dynamic task in the Task Inbox to open the task. A number of action tabs are available from this page. Using the actions available under the task tabs, you can begin working on the task. In the Showcase application, click the refresh button in the upper-right corner. Case tasks and processes that are in progress appear under Overview → Actions → In progress . When you have completed working on the task, click the Complete button under the Work tab. In the Showcase application, click the refresh button in the upper-right corner. The completed task appears under Overview → Actions → Completed . To view a dynamic process task in Business Central, click Menu → Manage → Process Instances . Click the dynamic process instance in the list of available process instances to view information about the process instance. In the Showcase application, click the refresh button in the upper-right corner. Case tasks and processes that are in progress appear under Overview → Actions → In progress . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/case-management-dynamic-tasks-proc
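Dynamic tasks can also be added programmatically through the KIE Server REST API instead of the Showcase UI. The sketch below is hypothetical: the container ID, case instance ID, credentials, payload fields, and the exact resource path are assumptions, so verify them against the KIE Server REST API documentation for your Red Hat Process Automation Manager version before use.

# Hypothetical example: add a dynamic user task to a running case instance
curl -u wbadmin:wbadmin -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "Contact customer", "description": "Follow-up call", "actors": "wbadmin"}' \
  "http://localhost:8080/kie-server/services/rest/server/containers/itorders/cases/instances/IT-0000000001/tasks"

A successful call makes the new task appear in the Task Inbox of the assigned users, just as it does when the task is created from the Showcase application.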
C.17. RollingUpgradeManager | C.17. RollingUpgradeManager org.infinispan.upgrade.RollingUpgradeManager The RollingUpgradeManager component handles the control hooks in order to migrate data from one version of Red Hat JBoss Data Grid to another. Table C.27. Operations Name Description Signature disconnectSource Disconnects the target cluster from the source cluster according to the specified migrator. void disconnectSource(String p0) recordKnownGlobalKeyset Dumps the global known keyset to a well-known key for retrieval by the upgrade process. void recordKnownGlobalKeyset() synchronizeData Synchronizes data from the old cluster to this cluster using the specified migrator. long synchronizeData(String p0) | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/RollingUpgradeManager
Chapter 11. GNOME Shell Extensions | Chapter 11. GNOME Shell Extensions This chapter introduces system-wide configuration of GNOME Shell Extensions. You will learn how to view the extensions, how to enable them, how to lock a list of enabled extensions or how to set up several extensions as mandatory for the users of the system. You will be using dconf when configuring GNOME Shell Extensions, setting the following two GSettings keys: org.gnome.shell.enabled-extensions org.gnome.shell.development-tools For more information on dconf and GSettings , see Chapter 9, Configuring Desktop with GSettings and dconf . 11.1. What Are GNOME Shell Extensions? GNOME Shell extensions allow for the customization of the default GNOME Shell interface and its parts, such as window management and application launching. Each GNOME Shell extension is identified by a unique identifier, the uuid. The uuid is also used for the name of the directory where an extension is installed. You can either install the extension per-user in ~/.local/share/gnome-shell/extensions/ uuid , or machine-wide in /usr/share/gnome-shell/extensions/ uuid . The uuid identifier is globally-unique. When choosing it, remember that the uuid must possess the following properties to prevent certain attacks: Your uuid must not contain Unicode characters. Your uuid must not contain the gnome.org ending as it must not appear to be affiliated with the GNOME Project. Your uuid must contain only alphanumerical characters, the period (.), the at symbol (@), and the underscore (_). Important Before deploying third-party GNOME Shell extensions on Red Hat Enterprise Linux, make sure to read the following document to learn about the Red Hat support policy for third-party software: How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors? To view installed extensions, you can use Looking Glass , GNOME Shell's integrated debugger and inspector tool. Procedure 11.1. View installed extensions Press Alt + F2 . Type in lg and press Enter to open Looking Glass . On the top bar of Looking Glass , click Extensions to open the list of installed extensions. Figure 11.1. Viewing Installed extensions with Looking Glass | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/GNOME-shell-extensions |
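For a machine-wide configuration, the org.gnome.shell.enabled-extensions key is typically set through a dconf keyfile, as described in Chapter 9 of this guide. The sketch below is illustrative only; the extension uuid is a placeholder, and the database name assumes the default dconf profile layout.

# Create /etc/dconf/db/local.d/00-extensions with the following content:
[org/gnome/shell]
enabled-extensions=['[email protected]']

# Then rebuild the system databases so the setting takes effect:
dconf update

Users who log in after the update see the listed extension enabled by default, unless a per-user setting overrides it.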
Chapter 3. Customizing workspace components | Chapter 3. Customizing workspace components To customize workspace components: Choose a Git repository for your workspace . Use a devfile . Configure an IDE . Add OpenShift Dev Spaces specific attributes in addition to the generic devfile specification. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.19/html/user_guide/customizing-workspace-components |
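As a point of reference for the "Use a devfile" item above, a minimal devfile might look like the following sketch. The component name, image, and memory limit are placeholders rather than values required by OpenShift Dev Spaces, so substitute the tooling image your workspace actually needs.

schemaVersion: 2.2.0
metadata:
  name: my-workspace
components:
  - name: tools
    container:
      image: quay.io/devfile/universal-developer-image:ubi8-latest
      memoryLimit: 2Gi

Committing a devfile like this to the root of the chosen Git repository lets OpenShift Dev Spaces pick it up when the workspace is created.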
Chapter 2. Preparing to register RHEL systems | Chapter 2. Preparing to register RHEL systems As you and other members of your Red Hat organization begin purchasing multiple subscriptions, installing software, and registering systems, the tasks required to manage these subscriptions across system deployments in physical, virtual, and cloud environments can become increasingly complex. Red Hat provides additional process and tooling options beyond the system registration tools to help with these tasks. If your organization is an established Red Hat customer, review your current tooling to ensure that you are taking advantage of the latest subscription experience. If your organization is a new Red Hat customer, some of this tooling is built in as your default subscription experience. Other tooling is optional, but recommended, to help you manage your environment. Review information about Red Hat accounts, so that your subscriptions and systems are associated with the correct accounts for your use cases. Review information about simple content access, so that you can enable the simplified "register and run" subscription experience that removes the need for complex system-level subscription attachment. Review information about the subscriptions service, so that you can use that service to gain an account-level view of current and historical subscription usage. Review information about system purpose attributes and values, so that you can match your subscriptions with use case information that enriches subscriptions service data and helps you understand subscription utilization across your account. 2.1. Your Red Hat account To register your Red Hat Enterprise Linux (RHEL) systems and access the content associated with your subscriptions, you must log in to Red Hat with an account that is associated with those subscriptions. A Red Hat account is used to identify and authenticate you to Red Hat. It provides you with access to Red Hat applications and services, purchasing capabilities, communities, support, information, and other benefits. Red Hat accounts are available in two different types: A corporate account that enables a set of users, such as system administrators, purchasing agents, IT management, and so on, to centrally purchase subscriptions and administer systems within a corporation or within a corporate organizational structure such as a function or division. A personal account that is for a single user to purchase their own subscriptions and administer their own systems. If you meet any of the following criteria, you already have a Red Hat account: You are part of a Red Hat corporate account and organization for your company, and an Organization Administrator has already created a Red Hat account for you within that organization. You have previously purchased a Red Hat subscription. You have already visited the Hybrid Cloud Console web page or other Red Hat web pages to create an account. It is possible that you could have both a corporate and a personal account, and that you use each for different purposes. It is also possible that you are unsure which type of account to use to register systems and install subscriptions, or even if you have an account. However, if your company is using Red Hat software to power enterprise-level solutions, it is likely that it has one or more corporate accounts and organizations to acquire and manage that software. 
If you need more information about Red Hat accounts and how they should be used to register systems, you should first discuss your options with internal contacts in your company before proceeding with Red Hat account creation. If you have additional questions, contact Red Hat customer service for assistance. Additional resources For more information about the status of any Red Hat accounts that you own, contact Red Hat customer service . For more information about Red Hat accounts, see the "How to Create a New Red Hat Login ID and Account" Customer Portal article. 2.2. Simple content access enablement Simple content access provides an improved subscription experience that removes many of the time-consuming and complex business processes associated with the older Red Hat entitlement-driven enforcement model. The simple content access tool removes the need to use an entitlement to attach a subscription to a system before you can access Red Hat subscription content on that system. In the entitlement-based subscription model, an entitlement is one of a predefined number of allowances that is used during the registration process to assign, or attach, a subscription to a system. The entitlement-based subscription model is now deprecated and is superseded by the access-based subscription model of simple content access. In the access-based subscription model, access to subscription content is provided by the existence of a valid subscription and registration of the system. Note The entitlement-based subscription model is no longer the default subscription mode, is currently deprecated, and will be retired in the future. Red Hat accounts that are still using the entitlement-based subscription model should begin working with their Red Hat account team, such as a technical account manager (TAM) or solution architect (SA), to answer questions or prepare for migration to simple content access. By using simple content access, you can more easily consume subscription content and reduce the complexity of your subscription management workflow. Instead, if you have access to a valid subscription, you can register a system and then consume the subscription content on that system, in a process that is commonly referred to as the "register and run" experience. If your organization uses the subscription management capabilities of Red Hat Subscription Management to manage your systems and subscriptions, an Organization Administrator for your Red Hat account can enable simple content access from Red Hat Subscription Management in the Red Hat Customer Portal. As of 15 July 2022, simple content access is enabled by default for all new Red Hat accounts. If your organization uses Red Hat Satellite version 6.12 or earlier, a Satellite administrator can enable simple content access from the manifests management tool that is available in Red Hat Hybrid Cloud Console. The manifest can then be used to apply simple content access at the Satellite organization level. For newly created manifests, simple content access is enabled by default. If your organization uses Red Hat Satellite version 6.13, a Satellite administrator can enable simple content access in the web user interface for Satellite. For newly created Satellite organizations, simple content access is enabled by default. Although currently you can still change the setting of the applied manifest in the Hybrid Cloud Console, the setting on the Satellite organization always overrides the setting on the manifest. 
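With simple content access enabled, the "register and run" experience described above reduces to registering the system and installing content, with no subscription attach step. A minimal illustration follows; the credentials and package name are placeholders.

# Register the system (simple content access removes the need to attach a subscription)
subscription-manager register --username <username> --password <password>
# Install content immediately after registration
dnf install <package>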
The subscriptions service and simple content access are designed to work together to simplify and streamline the overall subscription experience. By removing the need to attach subscriptions at the system level, simple content access reduces complexity and saves time when you are adding, removing, and renewing subscriptions. By offering visibility into subscription usage, the subscriptions service eliminates manual subscription management and enables account-wide governance of your subscriptions. Additional resources To learn more about simple content access, see the Getting Started with Simple Content Access guide. For information about how to enable simple content access, see Activating simple content access . For additional technical information about simple content access, including a deep-dive before-and-after comparison of the subscription workflow and links to instructional videos, see the "Simple Content Access" Customer Portal article. 2.3. Subscriptions service enablement The subscriptions service in the Red Hat Hybrid Cloud Console is a dashboard-based, Software-as-a-Service (SaaS) application that enables you to view subscription usage in your Red Hat account. It provides a visual representation of that usage over time across your hybrid infrastructure, including physical and virtual technology deployments; on-premise and cloud environments; and cluster, instance, and workload use cases. From the subscriptions service dashboard, you have an account-level view of current and historical subscription usage, along with the remaining capacity for growth and scaling. You also have a view of the subscriptions that are in use in the account and of the systems or other entities that are using those subscriptions. The account-level view of the subscriptions service dashboard can be shared within an organization among procurement personnel, system administrators, IT administrators, and operators for collaborative management of subscriptions, from purchasing and renewals to deployment decisions. The subscriptions service and simple content access are designed to work together to simplify and streamline the overall subscription experience. By removing the need to attach subscriptions at the system level, simple content access reduces complexity and saves time when you are adding, removing, and renewing subscriptions. By offering visibility into subscription usage, the subscriptions service eliminates manual subscription management and enables account-wide governance of your subscriptions. If your organization is not already using the subscriptions service, some steps are required to begin using it. Activating the subscriptions service You must activate the subscriptions service for the organization so that the service can begin collecting and displaying data. Activation can be either manual or automated if certain types of subscription purchases are made. If the subscriptions service is not active, any user in the organization can activate it. After activation, it can take up to 24 hours for certain types of data to begin appearing in the subscriptions service. Setting up the data collection tools The subscriptions service relies on data that is collected from several other tools that act as data sources. To report Red Hat Enterprise Linux usage, the subscriptions service can use data from the subscription management tools of Red Hat Subscription Management, from Red Hat Satellite, and from Red Hat Insights. 
You can use one or all of these tools for data collection, according to the needs of your IT environment. In addition, data collection related to host-guest mappings requires data from the virt-who tool and the Satellite inventory upload plugin. Additional resources To learn more about the subscriptions service, see the Getting Started with the Subscriptions Service guide. For more information about activating the subscriptions service, see Activating and opening the subscriptions service . For more information about which data collection tools you should use, see How to select the right data collection tool . For more information about configuring the data collection tools, see Setting up the subscriptions service for data collection . For additional technical information about the subscriptions service, including an analysis of sample subscriptions service data and links to instructional videos, see the "Subscription Watch" Customer Portal article. 2.4. System purpose configuration When you begin deploying subscriptions, it is important for the different personas in your organization to understand how and where those subscriptions are being used. Operations personas, including IT administrators and system administrators, need to build and manage systems to run specific workloads. Procurement personas need to manage purchases by balancing the account's subscription footprint with current and future business needs. The setting of use case data on a Red Hat Enterprise Linux (RHEL) system to record its intended use is done through a set of attributes that are collectively known as system purpose. Note System purpose attributes might be known by a different name in other Red Hat products. Collectively, they can also be known as subscription attributes. System purpose attributes include the following types of information: Technical use case information, such as workload information Business use case information, such as the IT environment, which determines the scope of support needed for that environment Operational use case information, such as the service level The following default values are available for each RHEL system purpose attribute: Role (technical use case) Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node Usage (business use case) Production Development/Test Disaster Recovery Service Level Agreement (operational use case) Premium Standard Self-Support These system purpose attribute values help operators guide workloads to the correct systems and help procurement personnel filter and analyze system usage in tools such as the subscriptions service to make more informed purchasing decisions. You can set system purpose values during multiple phases of the system life cycle, enabling your organization to set these values at the most appropriate point in your process. You can set system purpose values at build time, when you are creating installable images for your subscription content, at connection time, during installation and registration tasks, or at runtime, when you begin using the content.
For example: During activation key creation During image creation by configuring an image builder image with an embedded activation key that includes system purpose values During a GUI installation when using the Connect to Red Hat options to register your system During a Kickstart installation when using the syspurpose Kickstart command After installation using the subscription-manager command-line interface tool Additional resources To configure system purpose with an activation key, see Creating an activation key . To configure system purpose for RHEL 9 with Subscription Manager, see Configuring System Purpose using the subscription-manager command-line tool in Performing a standard RHEL 9 installation. To configure system purpose for RHEL with Kickstart, see Configuring System Purpose in a Kickstart file in Performing an advanced RHEL 9 installation. To configure system purpose for RHEL with Subscription Manager, see Configuring System Purpose in Performing an advanced RHEL 8 installation. To configure system purpose for RHEL with Kickstart, see Configuring System Purpose in a Kickstart file in Performing an advanced RHEL 8 installation. | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/prep-reg-rhel
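As an illustration of the post-installation option in the list above, the subscription-manager tool can set and display system purpose values at runtime. The values shown are taken from the defaults listed in this chapter; the exact subcommand syntax can vary slightly between RHEL releases, so treat this as a sketch.

subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose usage --set "Production"
subscription-manager syspurpose service-level --set "Premium"
subscription-manager syspurpose --show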
Chapter 14. Creating exports using NFS | Chapter 14. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. Follow the instructions below to create exports and access them externally from the OpenShift Cluster: Section 14.1, "Enabling the NFS feature" Section 14.2, "Creating NFS exports" Section 14.3, "Consuming NFS exports in-cluster" Section 14.4, "Consuming NFS exports externally from the OpenShift cluster" 14.1. Enabling the NFS feature To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: Output has multiple pods. For example: 14.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways: Create an NFS PVC using a YAML. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage → Persistent Volume Claims . Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per application requirement. Select Volume mode as Filesystem . Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in Bound status. 14.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports created by mounting a previously created PVC. You can mount the PVC in one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 14.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads → Pods . Click Create Pod to create a new application pod. Under the metadata section, add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add containers: section with image and volumeMounts sections: For example: Under the spec: section, add volumes: section to add the NFS PVC as a volume for the application pod: For example: 14.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created by a previously-created PVC. Procedure After the nfs flag is enabled, a single-server CephNFS is deployed by Rook.
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. | [
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"oc -n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"oc -n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/creating-exports-using-nfs_rhodf |
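To keep the external mount from section 14.4 across client reboots, an /etc/fstab entry on the NFS client is one option. The sketch below reuses the ingress host name and share path from the example above; adjust the mount options to your environment.

# /etc/fstab entry on the external NFS client (illustrative)
ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215  /export/mount/path  nfs4  proto=tcp,_netdev  0 0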
16.4. Migrating from the Synchronization-Based to the Trust-Based Solution | 16.4. Migrating from the Synchronization-Based to the Trust-Based Solution ID views can be used to migrate from the synchronization-based integration to the trust-based integration. The migration can be performed on the IdM server and is described in the Windows Integration Guide for Red Hat Enterprise Linux 7 . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/id-view-migration |
Considerations in adopting RHEL 8 | Considerations in adopting RHEL 8 Red Hat Enterprise Linux 8 Key differences between Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/index |
Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator | Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator Red Hat OpenStack Platform 17.1 Using director Operator to deploy and manage a Red Hat OpenStack Platform overcloud in a Red Hat OpenShift Container Platform OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index |
Chapter 1. Introduction to Red Hat Certified Cloud and Service Provider Certification policies | Chapter 1. Introduction to Red Hat Certified Cloud and Service Provider Certification policies 1.1. Audience Use this guide to understand the technical and operational certification requirements as implemented for CCSP partners who want to offer Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), or a managed service based on Red Hat Enterprise Linux. The certification tools and methodologies cater to cloud application images built on Red Hat Enterprise Linux. 1.2. Create value for our joint customers As a Certified Cloud and Service Provider (CCSP), you are required to certify images that you publish in a catalog. The certification process includes a series of tests that provide your Red Hat customers assurance that they will have a consistent experience across cloud providers, that the customer's experience comes with the highest level of support, and that good security practices are available to the customers. The cloud certification test suite (redhat-certification-cloud) includes three tests (supportable, configuration, security), each with a series of subtests and checks, which are explained below. Logs from a singular run with all three of the cloud tests and the test suite self check test (rhcert/selfcheck) must be submitted to Red Hat for new certifications and for recertifications. Most of the cloud certification subtests provide an immediate return status (Pass/Fail); however, some subtests may require detailed review by Red Hat to confirm success. Such tests are marked with REVIEW status in the Red Hat Certification application. Some tests may also identify a potential issue and return a WARN status. This status indicates that best practices have not been followed. Tests marked with the WARN status warrant attention or actions but do not prevent a certification from succeeding. Partners are recommended to review the output of such tests and perform appropriate actions based on the information contained within the warnings. Additional resources For more information on running the tests, see CCSP Certification Workflow Guide . 1.3. Test Suite versions You must install the latest version of the certification tooling and use the latest workflow for the certification process. After a new version of the certification tooling is released, Red Hat supports the tooling and workflow for a period of 90 days post the release. At the end of the 90 days period, test logs/results generated using the version(s) are automatically rejected and you are expected to regenerate the test logs/results using the latest tooling and workflow. The latest version of the certification tooling and workflow is available (by default) via Red Hat Subscription Management and documented in the CCSP Certification Workflow Guide . 1.4. Supported RHEL version and architecture The certifications are supported on the following RHEL version and architecture. RHEL version Architecture RHEL 9 64-bit AMD and Intel 64-bit IBM Z 64-bit ARM Little endian IBM Power systems RHEL 8 64-bit AMD and Intel 64-bit IBM Z 64-bit ARM Little endian IBM Power systems For information about hypervisor support, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization . 1.5. 
Understand Passthrough Certifications A passthrough certification is used when the same image is provided as a copy of an existing certified cloud image and is listed under a different image name. You can create a passthrough regular or gold RHEL image from an originally certified regular or gold RHEL image. The policy for submitting a passthrough image certification request requires you to: Ensure that the image is a duplicate of the original certified image except for the name, which might be different. As with the original image certification, it is expected that a given running image does include a certain drift from the original static on-disk image file in the form of instance-type dependent configuration data. | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_policy_guide/assembly-introduction-certified-cloud-service-provider_cloud-image-certification-policy
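The tests named in this chapter are run with the redhat-certification-cloud tooling on the image under test. The invocation below is a hypothetical sketch only; the exact commands, test names, and submission steps are defined in the CCSP Certification Workflow Guide for the tooling version you install, so confirm them there before running anything.

# Hypothetical run of the cloud certification tests plus the self check
rhcert-cli plan
rhcert-cli run --test supportable
rhcert-cli run --test configuration
rhcert-cli run --test security
rhcert-cli save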
7.3. Recovering from LVM Mirror Failure | 7.3. Recovering from LVM Mirror Failure This section provides an example of recovering from a situation where one leg of an LVM mirrored volume fails because the underlying device for a physical volume goes down and the mirror_log_fault_policy parameter is set to remove , requiring that you manually rebuild the mirror. For information on setting the mirror_log_fault_policy parameter, see Section 5.4.3.1, "Mirrored Logical Volume Failure Policy" . When a mirror leg fails, LVM converts the mirrored volume into a linear volume, which continues to operate as before but without the mirrored redundancy. At that point, you can add a new disk device to the system to use as a replacement physical device and rebuild the mirror. The following command creates the physical volumes which will be used for the mirror. The following commands create the volume group vg and the mirrored volume groupfs . You can use the lvs command to verify the layout of the mirrored volume and the underlying devices for the mirror leg and the mirror log. Note that in the first example the mirror is not yet completely synced; you should wait until the Copy% field displays 100.00 before continuing. In this example, the primary leg of the mirror /dev/sda1 fails. Any write activity to the mirrored volume causes LVM to detect the failed mirror. When this occurs, LVM converts the mirror into a single linear volume. In this case, to trigger the conversion, we execute a dd command. You can use the lvs command to verify that the device is now a linear device. Because of the failed disk, I/O errors occur. At this point you should still be able to use the logical volume, but there will be no mirror redundancy. To rebuild the mirrored volume, you replace the broken drive and recreate the physical volume. If you use the same disk rather than replacing it with a new one, you will see "inconsistent" warnings when you run the pvcreate command. You can prevent that warning from appearing by executing the vgreduce --removemissing command. Next, you extend the original volume group with the new physical volume. Convert the linear volume back to its original mirrored state. You can use the lvs command to verify that the mirror is restored. | [
"pvcreate /dev/sd[abcdefgh][12] Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sda2\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdb2\" successfully created Physical volume \"/dev/sdc1\" successfully created Physical volume \"/dev/sdc2\" successfully created Physical volume \"/dev/sdd1\" successfully created Physical volume \"/dev/sdd2\" successfully created Physical volume \"/dev/sde1\" successfully created Physical volume \"/dev/sde2\" successfully created Physical volume \"/dev/sdf1\" successfully created Physical volume \"/dev/sdf2\" successfully created Physical volume \"/dev/sdg1\" successfully created Physical volume \"/dev/sdg2\" successfully created Physical volume \"/dev/sdh1\" successfully created Physical volume \"/dev/sdh2\" successfully created",
"vgcreate vg /dev/sd[abcdefgh][12] Volume group \"vg\" successfully created lvcreate -L 750M -n groupfs -m 1 vg /dev/sda1 /dev/sdb1 /dev/sdc1 Rounding up size to full physical extent 752.00 MB Logical volume \"groupfs\" created",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 21.28 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mlog] vg lwi-ao 4.00M /dev/sdc1(0) lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 100.00 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mlog] vg lwi-ao 4.00M i /dev/sdc1(0)",
"dd if=/dev/zero of=/dev/vg/groupfs count=10 10+0 records in 10+0 records out",
"lvs -a -o +devices /dev/sda1: read failed after 0 of 2048 at 0: Input/output error /dev/sda2: read failed after 0 of 2048 at 0: Input/output error LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg -wi-a- 752.00M /dev/sdb1(0)",
"pvcreate /dev/sdi[12] Physical volume \"/dev/sdi1\" successfully created Physical volume \"/dev/sdi2\" successfully created pvscan PV /dev/sdb1 VG vg lvm2 [67.83 GB / 67.10 GB free] PV /dev/sdb2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdi1 lvm2 [603.94 GB] PV /dev/sdi2 lvm2 [603.94 GB] Total: 16 [2.11 TB] / in use: 14 [949.65 GB] / in no VG: 2 [1.18 TB]",
"vgextend vg /dev/sdi[12] Volume group \"vg\" successfully extended pvscan PV /dev/sdb1 VG vg lvm2 [67.83 GB / 67.10 GB free] PV /dev/sdb2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdi1 VG vg lvm2 [603.93 GB / 603.93 GB free] PV /dev/sdi2 VG vg lvm2 [603.93 GB / 603.93 GB free] Total: 16 [2.11 TB] / in use: 16 [2.11 TB] / in no VG: 0 [0 ]",
"lvconvert -m 1 /dev/vg/groupfs /dev/sdi1 /dev/sdb1 /dev/sdc1 Logical volume mirror converted.",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 68.62 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdi1(0) [groupfs_mlog] vg lwi-ao 4.00M /dev/sdc1(0)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirrorrecover |
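The introduction to this example assumes that the mirror fault policy is set to remove. That policy is defined in the activation section of /etc/lvm/lvm.conf; the excerpt below is illustrative, and the defaults on your release may differ, so check Section 5.4.3.1 and your own lvm.conf before changing it.

activation {
    # "remove" converts the mirror on failure and leaves the rebuild to the administrator,
    # which is the situation handled in this example
    mirror_log_fault_policy = "remove"
    mirror_image_fault_policy = "remove"
}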
Chapter 9. Image configuration resources | Chapter 9. Image configuration resources Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. 
containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 
3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a 9.2.1. Adding specific registries You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. 
If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked. Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, the allowed registries list is used to update the image signature policy in the /etc/containers/policy.json file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/policy.json | jq '.' The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. 
Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. Blocking specific registries You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries parameter or the allowedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes.
After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.2.1. Blocking a payload registry In a mirroring configuration, you can block upstream payload registries in a disconnected environment using an ImageContentSourcePolicy (ICSP) object. The following example procedure demonstrates how to block the quay.io/openshift-payload payload registry. Procedure Create the mirror configuration using an ImageContentSourcePolicy (ICSP) object to mirror the payload to a registry in your instance. The following example ICSP file mirrors the payload to internal-mirror.io/openshift-payload : apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload After the object deploys onto your nodes, verify that the mirror configuration is set by checking the /etc/containers/registries.conf file: Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" Use the following command to edit the image.config.openshift.io custom resource file: USD oc edit image.config.openshift.io cluster To block the payload registry, add the following configuration to the image.config.openshift.io custom resource file: spec: registrySources: blockedRegistries: - quay.io/openshift-payload Verification Verify that the upstream payload registry is blocked by checking the /etc/containers/registries.conf file on the node. Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" 9.2.3. Allowing insecure registries You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks.
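Before you edit the CR, you can review any registry source configuration that already exists so that you do not overwrite an existing allowed, blocked, or insecure list. The following command is a minimal sketch that uses only standard oc output options and assumes no cluster-specific values: USD oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources}' Empty output means that no registry source restrictions are currently defined.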
Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. You can specify a repository in that registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. Verification To check that the registries have been added to the policy file, use the following command on a node: USD cat /etc/containers/registries.conf The following example indicates that the insecure.com registry is insecure, and that images from it are allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhel7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR.
If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged because the image might not deploy if the public registry requires authentication. Use fully-qualified image names with public registries. Red Hat internal or private registries typically support the use of image short names. If you list public registries under the containerRuntimeSearchRegistries parameter, you expose your credentials to all the registries on the list and you risk network and registry attacks. You cannot list multiple public registries under the containerRuntimeSearchRegistries parameter if each public registry requires different credentials and a cluster does not list the public registry in the global pull secret. For a public registry that requires authentication, you can use an image short name only if the registry has its credentials stored in the global pull secret. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries. The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster CR: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. 
For disconnected clusters, mirror registries should also be added. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.2.6. Configuring image registry repository mirroring Setting up container registry repository mirroring enables you to do the following: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. The attributes of repository mirroring in OpenShift Container Platform include: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. 
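Because every ImageContentSourcePolicy object adds mirror entries to each node, it can be useful to review the objects that already exist before you create a new one. The following commands are a sketch that relies only on standard oc behavior and does not assume any particular object names: USD oc get imagecontentsourcepolicy USD oc describe imagecontentsourcepolicy <name> If existing objects already define mirrors for the same source repository, consider updating them instead of creating overlapping configurations.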
Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a datacenter that is in a disconnected environment. After OpenShift Container Platform installation: Even if you don't configure mirroring during OpenShift Container Platform installation, you can do so later using the ImageContentSourcePolicy object. The following procedure provides a post-installation mirror configuration, where you create an ImageContentSourcePolicy object that identifies: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. Note You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy object. You cannot add a pull secret to a project. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source directory to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy \ docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi8/ubi-minimal image from registry.access.redhat.com . After you create the registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Log in to your OpenShift Container Platform cluster. Create an ImageContentSourcePolicy file (for example, registryrepomirror.yaml ), replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8 1 Indicates the name of the image registry and repository. 2 Indicates multiple mirror repositories for each target repository. If one mirror is down, the target repository can use another mirror. 3 Indicates the registry and repository containing the content that is mirrored. 4 You can configure a namespace inside a registry to use any image in that namespace. 
If you use a registry domain as a source, the ImageContentSourcePolicy resource is applied to all repositories from the registry. 5 If you configure the registry name, the ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry. 6 Pulls the image mirror.example.net/image@sha256:... . 7 Pulls the image myimage in the source registry namespace from the mirror mirror.example.net/myimage@sha256:... . 8 Pulls the image registry.example.com/example/myimage from the mirror registry mirror.example.net/registry-example-com/example/myimage@sha256:... . The ImageContentSourcePolicy resource is applied to all repositories from a source registry to a mirror registry mirror.example.net/registry-example-com . Create the new ImageContentSourcePolicy object: USD oc create -f registryrepomirror.yaml After the ImageContentSourcePolicy object is created, the new settings are deployed to each node and the cluster starts using the mirrored repository for requests to the source repository. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0 The ImageContentSourcePolicy resource does not restart the nodes. Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi8/ubi-minimal" mirror-by-digest-only = true [[registry.mirror]] location = "example.io/example/ubi-minimal" [[registry.mirror]] location = "example.com/example/ubi-minimal" [[registry]] prefix = "" location = "registry.example.com" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/registry-example-com" [[registry]] prefix = "" location = "registry.example.com/example" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net" [[registry]] prefix = "" location = "registry.example.com/example/myimage" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.net/image" [[registry]] prefix = "" location = "registry.redhat.io" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com" [[registry]] prefix = "" location = "registry.redhat.io/openshift4" mirror-by-digest-only = true [[registry.mirror]] location = "mirror.example.com/redhat" Pull an image digest to the node from the source and check if it is resolved by the mirror. ImageContentSourcePolicy objects support image digests only, not image tags.
sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. Additional resources For more information about global pull secrets, see Updating the global cluster pull secret . | [
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/policy.json | jq '.'",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io cluster",
"spec: registrySource: blockedRegistries: - quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/images/image-configuration |
2.2. Fencing Overview | 2.2. Fencing Overview In a cluster system, there can be many nodes working on several pieces of vital production data. Nodes in a busy, multi-node cluster could begin to act erratically or become unavailable, prompting action by administrators. The problems caused by errant cluster nodes can be mitigated by establishing a fencing policy. Fencing is the disconnection of a node from the cluster's shared storage. Fencing cuts off I/O from shared storage, thus ensuring data integrity. The cluster infrastructure performs fencing through the STONITH facility. When Pacemaker determines that a node has failed, it communicates to other cluster-infrastructure components that the node has failed. STONITH fences the failed node when notified of the failure. Other cluster-infrastructure components determine what actions to take, which includes performing any recovery that needs to be done. For example, DLM and GFS2, when notified of a node failure, suspend activity until they detect that STONITH has completed fencing the failed node. Upon confirmation that the failed node is fenced, DLM and GFS2 perform recovery. DLM releases locks of the failed node; GFS2 recovers the journal of the failed node. Node-level fencing through STONITH can be configured with a variety of supported fence devices, including: Uninterruptible Power Supply (UPS) - a device containing a battery that can be used to fence devices in the event of a power failure Power Distribution Unit (PDU) - a device with multiple power outlets used in data centers for clean power distribution as well as fencing and power isolation services Blade power control devices - dedicated systems installed in a data center configured to fence cluster nodes in the event of failure Lights-out devices - Network-connected devices that manage cluster node availability and can perform fencing, power on/off, and other services by administrators locally or remotely | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/s1-fencing-HAAO |
Chapter 104. Password schema reference | Chapter 104. Password schema reference Used in: KafkaUserScramSha512ClientAuthentication Property Description valueFrom Secret from which the password should be read. PasswordSource | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-Password-reference |
Chapter 13. Volume Snapshots | Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. 
Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp |
7.98. krb5-auth-dialog | 7.98. krb5-auth-dialog 7.98.1. RHBA-2015:0812 - krb5-auth-dialog bug fix update Updated krb5-auth-dialog packages that fix one bug are now available for Red Hat Enterprise Linux 6. Kerberos is a networked authentication system which allows clients and servers to authenticate to each other with the help of a trusted third party, the Kerberos key distribution center. The krb5-auth-dialog packages contain a dialog that warns the user when their Kerberos credentials are about to expire and allows them to renew them. Bug Fix BZ# 848026 Previously, users could experience a disproportionate increase in memory utilization by krb5-auth-dialog after being logged in on VMware virtual machines for longer periods of time. To fix this bug, a patch has been applied. Now, the krb5-auth-dialog memory leak no longer occurs in this situation. Users of krb5-auth-dialog are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-krb5-auth-dialog |
Chapter 2. Configuring the Compute service (nova) | Chapter 2. Configuring the Compute service (nova) As a cloud administrator, you use environment files to customize the Compute (nova) service. Puppet generates and stores this configuration in the /var/lib/config-data/puppet-generated/<nova_container>/etc/nova/nova.conf file. Use the following configuration methods to customize the Compute service configuration, in the following order of precedence: Heat parameters - as detailed in the Compute (nova) Parameters section in the Overcloud Parameters guide. The following example uses heat parameters to set the default scheduler filters, and configure an NFS backend for the Compute service: Puppet parameters - as defined in /etc/puppet/modules/nova/manifests/* : Note Only use this method if an equivalent heat parameter does not exist. Manual hieradata overrides - for customizing parameters when no heat or Puppet parameter exists. For example, the following sets the timeout_nbd in the [DEFAULT] section on the Compute role: Warning If a heat parameter exists, use it instead of the Puppet parameter. If a Puppet parameter exists, but not a heat parameter, use the Puppet parameter instead of the manual override method. Use the manual override method only if there is no equivalent heat or Puppet parameter. Tip Follow the guidance in Identifying parameters that you want to modify to determine if a heat or Puppet parameter is available for customizing a particular configuration. For more information about how to configure overcloud services, see Heat parameters in the Advanced Overcloud Customization guide. 2.1. Configuring memory for overallocation When you use memory overcommit ( NovaRAMAllocationRatio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio. Note If your NovaRAMAllocationRatio parameter is set to < 1 , follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide. Prerequisites You have calculated the swap size your node requires. For more information, see Calculating swap size . Procedure Copy the /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml file to your environment file directory: Configure the swap size by adding the following parameters to your enable-swap.yaml file: Add the enable_swap.yaml environment file to the stack with your other environment files and deploy the overcloud: 2.2. Calculating reserved host memory on Compute nodes To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following: The resources that run on the host, for example, OSD consumes 3 GB of memory. The emulator overhead required to host instances. The hypervisor for each instance. After you calculate the additional demands on memory, use the following formula to help you determine the amount of memory to reserve for host processes on each node: Replace vm_no with the number of instances. Replace avg_instance_size with the average amount of memory each instance can use. Replace overhead with the hypervisor overhead required for each instance. Replace resource1 and all resources up to <resourcen> with the number of a resource type on the node. Replace resource_ram with the amount of RAM each resource of this type requires. 2.3. Calculating swap size The allocated swap size must be large enough to handle any memory overcommit. 
You can use the following formulas to calculate the swap size your node requires: overcommit_ratio = NovaRAMAllocationRatio - 1 Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap) The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services. For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and NovaRAMAllocationRatio set to 1 : Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB For information about how to calculate the NovaReservedHostMemory value, see Calculating reserved host memory on Compute nodes . For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide. | [
"parameter_defaults: NovaSchedulerDefaultFilters: AggregateInstanceExtraSpecsFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter NovaNfsEnabled: true NovaNfsShare: '192.0.2.254:/export/nova' NovaNfsOptions: 'context=system_u:object_r:nfs_t:s0' NovaNfsVersion: '4.2'",
"parameter_defaults: ComputeExtraConfig: nova::compute::force_raw_images: True",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/timeout_nbd: value: '20'",
"cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml",
"parameter_defaults: swap_size_megabytes: <swap size in MB> swap_path: <full path to location of swap, default: /swap>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/enable-swap.yaml",
"NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourcen * resource_ram))"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-the-compute-service_osp |
Chapter 5. Updated boot images | Chapter 5. Updated boot images The Machine Config Operator (MCO) uses a boot image to start a Red Hat Enterprise Linux CoreOS (RHCOS) node. By default, OpenShift Container Platform does not manage the boot image. This means that the boot image in your cluster is not updated along with your cluster. For example, if your cluster was originally created with OpenShift Container Platform 4.12, the boot image that the cluster uses to create nodes is the same 4.12 version, even if your cluster is at a later version. If the cluster is later upgraded to 4.13 or later, new nodes continue to scale with the same 4.12 image. This process could cause the following issues: Extra time to start nodes Certificate expiration issues Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and is not supported for clusters managed by the Cluster API. Important The updating boot image feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To view the current boot image used in your cluster, examine a machine set: Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1 # ... 1 This boot image is the same as the originally-installed OpenShift Container Platform version, in this example OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. 5.1. Configuring updated boot images By default, OpenShift Container Platform does not manage the boot image. You can configure your cluster to update the boot image whenever you update your cluster by modifying the MachineConfiguration object. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see "Enabling features using feature gates" in the Additional resources section. Procedure Edit the MachineConfiguration object, named cluster , to enable the updating of boot images by running the following command: USD oc edit MachineConfiguration cluster Optional: Configure the boot image update feature for all the machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... 
managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2 1 Activates the boot image update feature. 2 Specifies that all the machine sets in the cluster are to be updated. Optional: Configure the boot image update feature for specific machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: "true" 2 1 Activates the boot image update feature. 2 Specifies that any machine set with this label is to be updated. Tip If an appropriate label is not present on the machine set, add a key/value pair by running a command similar to the following: USD oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api Verification View the current state of the boot image updates by viewing the machine configuration object: USD oc get machineconfiguration cluster -n openshift-machine-api -o yaml Example MachineConfiguration object with the boot image update status kind: MachineConfiguration metadata: name: cluster # ... status: conditions: - lastTransitionTime: "2024-09-09T13:51:37Z" 1 message: Reconciled 1 of 2 MAPI MachineSets | Reconciled 0 of 0 CAPI MachineSets | Reconciled 0 of 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: "True" type: BootImageUpdateProgressing - lastTransitionTime: "2024-09-09T13:51:37Z" 2 message: 0 Degraded MAPI MachineSets | 0 Degraded CAPI MachineSets | 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: "False" type: BootImageUpdateDegraded 1 Status of the boot image update. Cluster CAPI Operator machine sets and machine deployments are not currently supported for boot image updates. 2 Indicates if any boot image updates failed. If any of the updates fail, the Machine Config Operator is degraded. In this case, manual intervention might be required. Get the boot image version by running the following command: USD oc get machinesets <machineset_name> -n openshift-machine-api -o yaml Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: "true" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1 # ... 1 This boot image is the same as the current OpenShift Container Platform version. Additional resources Enabling features using feature gates 5.2. Disabling updated boot images To disable the updated boot image feature, edit the MachineConfiguration object to remove the managedBootImages stanza. If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not roll back the nodes or machine sets to the originally-installed boot image. The machine sets retain the boot image version that was present when the feature was enabled, and that version is not updated again when the cluster is upgraded to a new OpenShift Container Platform version in the future.
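If you enabled the feature with the Partial selection mode, you might want to confirm which compute machine sets are currently opted in before you disable updates. The following command is a sketch; the update-boot-image label comes from the earlier example, so replace it with whatever label you used: USD oc get machinesets.machine.openshift.io -n openshift-machine-api -l update-boot-image=true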
Procedure Disable updated boot images by editing the MachineConfiguration object: USD oc edit MachineConfiguration cluster Remove the managedBootImages stanza: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 1 Remove the entire stanza to disable updated boot images. | [
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2",
"oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api",
"oc get machineconfiguration cluster -n openshift-machine-api -o yaml",
"kind: MachineConfiguration metadata: name: cluster status: conditions: - lastTransitionTime: \"2024-09-09T13:51:37Z\" 1 message: Reconciled 1 of 2 MAPI MachineSets | Reconciled 0 of 0 CAPI MachineSets | Reconciled 0 of 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: \"True\" type: BootImageUpdateProgressing - lastTransitionTime: \"2024-09-09T13:51:37Z\" 2 message: 0 Degraded MAPI MachineSets | 0 Degraded CAPI MachineSets | 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: \"False\" type: BootImageUpdateDegraded",
"oc get machinesets <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/machine_configuration/mco-update-boot-images |
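A quick way to spot machine sets that still reference an older boot image is to compare the image field across all MachineSet objects. The following is a minimal sketch, not part of the product documentation: it assumes a GCP cluster with the disks layout shown in the examples above, a logged-in oc client, and that jq is installed; the jq path is platform-specific and may differ elsewhere.

# List each MachineSet together with the GCP boot image it references.
# The providerSpec path matches the GCP example above.
oc get machinesets.machine.openshift.io -n openshift-machine-api -o json \
  | jq -r '.items[]
      | [.metadata.name, .spec.template.spec.providerSpec.value.disks[0].image]
      | @tsv'

Machine sets whose image column still shows the originally installed RHCOS version are candidates for the boot image update feature described above.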
Chapter 3. Managing confined and unconfined users | Chapter 3. Managing confined and unconfined users Each Linux user is mapped to an SELinux user according to the rules in the SELinux policy. Administrators can modify these rules by using the semanage login utility or by assigning Linux users directly to specific SELinux users. Therefore, a Linux user has the restrictions of the SELinux user to which it is assigned. When a Linux user that is assigned to an SELinux user launches a process, this process inherits the SELinux user's restrictions, unless other rules specify a different role or type. 3.1. Confined and unconfined users in SELinux By default, all Linux users in Red Hat Enterprise Linux, including users with administrative privileges, are mapped to the unconfined SELinux user unconfined_u . You can improve the security of the system by assigning users to SELinux confined users. The security context for a Linux user consists of the SELinux user, the SELinux role, and the SELinux type. For example: Where: user_u Is the SELinux user. user_r Is the SELinux role. user_t Is the SELinux type. After a Linux user logs in, its SELinux user cannot change. However, its type and role can change, for example, during transitions. To see the SELinux user mapping on your system, use the semanage login -l command as root: # semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by default, which is mapped to the SELinux unconfined_u user. The following line defines the default mapping: Confined users are restricted by SELinux rules explicitly defined in the current SELinux policy. Unconfined users are subject to only minimal restrictions by SELinux. Confined and unconfined Linux users are subject to executable and writable memory checks, and are also restricted by MCS or MLS. To list the available SELinux users, enter the following command: Note that the seinfo command is provided by the setools-console package, which is not installed by default. If an unconfined Linux user executes an application that SELinux policy defines as one that can transition from the unconfined_t domain to its own confined domain, the unconfined Linux user is still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined. Therefore, the exploitation of a flaw in the application can be limited by the policy. Similarly, we can apply these checks to confined users. Each confined user is restricted by a confined user domain. The SELinux policy can also define a transition from a confined user domain to its own target confined domain. In such a case, confined users are subject to the restrictions of that target confined domain. The main point is that special privileges are associated with the confined users according to their role. 3.2. Roles and access rights of SELinux users The SELinux policy maps each Linux user to an SELinux user. This allows Linux users to inherit the restrictions of SELinux users. You can customize the permissions for confined users in your SELinux policy according to specific needs by adjusting booleans in the policy. You can determine the current state of these booleans by using the semanage boolean -l command. 
To list all SELinux users, their SELinux roles, and levels and ranges for MLS and MCS, use the semanage user -l command as root . Table 3.1. Roles of SELinux users User Default role Additional roles unconfined_u unconfined_r system_r guest_u guest_r xguest_u xguest_r user_u user_r staff_u staff_r sysadm_r unconfined_r system_r sysadm_u sysadm_r root staff_r sysadm_r unconfined_r system_r system_u system_r Note that system_u is a special user identity for system processes and objects, and system_r is the associated role. Administrators must never associate this system_u user and the system_r role to a Linux user. Also, unconfined_u and root are unconfined users. For these reasons, the roles associated to these SELinux users are not included in the following table Types and access rights of SELinux roles . Each SELinux role corresponds to an SELinux type and provides specific access rights. Table 3.2. Types and access rights of SELinux roles Role Type Log in using X Window System su and sudo Execute in home directory and /tmp (default) Networking unconfined_r unconfined_t yes yes yes yes guest_r guest_t no no yes no xguest_r xguest_t yes no yes web browsers only (Mozilla Firefox, GNOME Web) user_r user_t yes no yes yes staff_r staff_t yes only sudo yes yes auditadm_r auditadm_t yes yes yes dbadm_r dbadm_r yes yes yes logadm_r logadm_t yes yes yes webadm_r webadm_r yes yes yes secadm_r secadm_t yes yes yes sysadm_r sysadm_t only when the xdm_sysadm_login boolean is on yes yes yes For more detailed descriptions of the non-administrator roles, see Confined non-administrator roles in SELinux . For more detailed descriptions of the administrator roles, see Confined administrator roles in SELinux . To list all available roles, enter the seinfo -r command: Note that the seinfo command is provided by the setools-console package, which is not installed by default. Additional resources seinfo(1) , semanage-login(8) , and xguest_selinux(8) man pages installed with the selinux-policy-doc package How to modify SELinux settings with booleans 3.3. Confined non-administrator roles in SELinux In SELinux, confined non-administrator roles grant specific sets of privileges and permissions for performing specific tasks to the Linux users assigned to them. By assigning separate confined non-administrator roles, you can assign specific privileges to individual users. This is useful in scenarios with multiple users who each have a different level of authorizations. You can also customize the permissions of SELinux roles by changing the related SELinux booleans on your system. To see the SELinux booleans and their current state, use the semanage boolean -l command as root. You can get more detailed descriptions if you install the selinux-policy-devel package. Linux users in the user_t , guest_t , and xguest_t domains can only run set user ID ( setuid ) applications if SELinux policy permits it (for example, passwd ). These users cannot run the setuid applications su and sudo , and therefore cannot use these applications to become root. By default, Linux users in the staff_t , user_t , guest_t , and xguest_t domains can execute applications in their home directories and /tmp . Applications inherit the permissions of the user that executed them. To prevent guest_t , and xguest_t users from executing applications in directories in which they have write access, set the guest_exec_content and xguest_exec_content booleans to off . 
SELinux has the following confined non-administrator roles, each with specific privileges and limitations: guest_r Has very limited permissions. Users assigned to this role cannot access the network, but can execute files in the /tmp and /home directories. Related boolean: xguest_r Has limited permissions. Users assigned to this role can log into X Window, access web pages by using network browsers, and access media. They can also execute files in the /tmp and /home directories. Related booleans: user_r Has non-privileged access with full user permissions. Users assigned to this role can perform most actions that do not require administrative privileges. Related booleans: staff_r Has permissions similar to user_r and additional privileges. In particular, users assigned to this role are allowed to run sudo to execute administrative commands that are normally reserved for the root user. This changes roles and the effective user ID (EUID) but does not change the SELinux user. Related booleans: Additional resources To map a Linux user to staff_u and configure sudo , see Confining an administrator using sudo and the sysadm_r role . For additional information about each role and the associated types, see the relevant man pages installed with the selinux-policy-doc package: guest_selinux(8) , xguest_selinux(8) , user_selinux(8) , and staff_selinux(8) 3.4. Confined administrator roles in SELinux In SELinux, confined administrator roles grant specific sets of privileges and permissions for performing specific tasks to the Linux users assigned to them. By assigning separate confined administrator roles, you can divide the privileges over various domains of system administration to individual users. This is useful in scenarios with multiple administrators, each with a separate domain. You can assign these roles to SELinux users by using the semanage user command. SELinux has the following confined administrator roles: auditadm_r The audit administrator role allows managing processes related to the Audit subsystem. Related boolean: dbadm_r The database administrator role allows managing MariaDB and PostgreSQL databases. Related booleans: logadm_r The log administrator role allows managing logs, specifically, SELinux types related to the Rsyslog logging service and the Audit subsystem. Related boolean: webadm_r The web administrator allows managing the Apache HTTP Server. Related booleans: secadm_r The security administrator role allows managing the SELinux database. Related booleans: sysadm_r The system administrator role allows doing everything of the previously listed roles and has additional privileges. In non-default configurations, security administration can be separated from system administration by disabling the sysadm_secadm module in the SELinux policy. For detailed instructions, see Separating system administration from security administration in MLS . The sysadm_u user cannot log in directly using SSH. To enable SSH logins for sysadm_u , set the ssh_sysadm_login boolean to on : Related booleans: Additional resources To assign a Linux user to a confined administrator role, see Confining an administrator by mapping to sysadm_u . For additional information about each role, and the associated types, see the relevant man pages installed with the selinux-policy-doc package: auditadm_selinux(8) , dbadm_selinux (8) , logadm_selinux(8) , webadm_selinux(8) , secadm_selinux(8) , and sysadm_selinux(8) 3.5. 
Adding a new user automatically mapped to the SELinux unconfined_u user The following procedure demonstrates how to add a new Linux user to the system. The user is automatically mapped to the SELinux unconfined_u user. Prerequisites The root user is running unconfined, as it does by default in Red Hat Enterprise Linux. Procedure Enter the following command to create a new Linux user named <example_user> : To assign a password to the Linux <example_user> user: Log out of your current session. Log in as the Linux <example_user> user. When you log in, the pam_selinux PAM module automatically maps the Linux user to an SELinux user (in this case, unconfined_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Verification When logged in as the <example_user> user, check the context of a Linux user: Additional resources pam_selinux(8) man page on your system 3.6. Adding a new user as an SELinux-confined user Use the following steps to add a new SELinux-confined user to the system. This example procedure maps the user to the SELinux staff_u user right with the command for creating the user account. Prerequisites The root user is running unconfined, as it does by default in Red Hat Enterprise Linux. Procedure Enter the following command to create a new Linux user named <example_user> and map it to the SELinux staff_u user: To assign a password to the Linux <example_user> user: Log out of your current session. Log in as the Linux <example_user> user. The user's shell launches with the staff_u context. Verification When logged in as the <example_user> user, check the context of a Linux user: Additional resources pam_selinux(8) man page on your system 3.7. Confining regular users in SELinux You can confine all regular users on your system by mapping them to the user_u SELinux user. By default, all Linux users in Red Hat Enterprise Linux, including users with administrative privileges, are mapped to the unconfined SELinux user unconfined_u . You can improve the security of the system by assigning users to SELinux confined users. This is useful to conform with the V-71971 Security Technical Implementation Guide . Procedure Display the list of SELinux login records. The list displays the mappings of Linux users to SELinux users: # semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * Map the __default__ user, which represents all users without an explicit mapping, to the user_u SELinux user: # semanage login -m -s user_u -r s0 __default__ Verification Check that the __default__ user is mapped to the user_u SELinux user: # semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ user_u s0 * root unconfined_u s0-s0:c0.c1023 * Verify that the processes of a new user run in the user_u:user_r:user_t:s0 SELinux context. Create a new user: Define a password for <example_user> : Log out as root and log in as the new user. Show the security context for the user's ID: Show the security context of the user's current processes: [ <example_user> @localhost ~]USD ps axZ LABEL PID TTY STAT TIME COMMAND - 1 ? Ss 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 - 3729 ? S 0:00 (sd-pam) user_u:user_r:user_t:s0 3907 ? Ss 0:00 /usr/lib/systemd/systemd --user - 3911 ? S 0:00 (sd-pam) user_u:user_r:user_t:s0 3918 ? S 0:00 sshd: <example_user> @pts/0 user_u:user_r:user_t:s0 3922 pts/0 Ss 0:00 -bash user_u:user_r:user_dbusd_t:s0 3969 ? 
Ssl 0:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only user_u:user_r:user_t:s0 3971 pts/0 R+ 0:00 ps axZ 3.8. Confining an administrator by mapping to sysadm_u You can confine a user with administrative privileges by mapping the user directly to the sysadm_u SELinux user. When the user logs in, the session runs in the sysadm_u:sysadm_r:sysadm_t SELinux context. By default, all Linux users in Red Hat Enterprise Linux, including users with administrative privileges, are mapped to the unconfined SELinux user unconfined_u . You can improve the security of the system by assigning users to SELinux confined users. This is useful to conform with the V-71971 Security Technical Implementation Guide . Prerequisites The root user runs unconfined. This is the Red Hat Enterprise Linux default. Procedure Optional: To allow sysadm_u users to connect to the system by using SSH: Map a new or existing user to the sysadm_u SELinux user: To map a new user, add a new user to the wheel user group and map the user to the sysadm_u SELinux user: To map an existing user, add the user to the wheel user group and map the user to the sysadm_u SELinux user: Restore the context of the user's home directory: Verification Check that <example_user> is mapped to the sysadm_u SELinux user: Log in as <example_user> , for example, by using SSH, and show the user's security context: Switch to the root user: Verify that the security context remains unchanged: Try an administrative task, for example, restarting the sshd service: If there is no output, the command finished successfully. If the command does not finish successfully, it prints the following message: 3.9. Confining an administrator by using sudo and the sysadm_r role You can map a specific user with administrative privileges to the staff_u SELinux user, and configure sudo so that the user can gain the sysadm_r SELinux administrator role. This role allows the user to perform administrative tasks without SELinux denials. When the user logs in, the session runs in the staff_u:staff_r:staff_t SELinux context, but when the user enters a command by using sudo , the session changes to the staff_u:sysadm_r:sysadm_t context. By default, all Linux users in Red Hat Enterprise Linux, including users with administrative privileges, are mapped to the unconfined SELinux user unconfined_u . You can improve the security of the system by assigning users to SELinux confined users. This is useful to conform with the V-71971 Security Technical Implementation Guide . Prerequisites The root user runs unconfined. This is the Red Hat Enterprise Linux default. Procedure Map a new or existing user to the staff_u SELinux user: To map a new user, add a new user to the wheel user group and map the user to the staff_u SELinux user: To map an existing user, add the user to the wheel user group and map the user to the staff_u SELinux user: Restore the context of the user's home directory: To allow <example_user> to gain the SELinux administrator role, create a new file in the /etc/sudoers.d/ directory, for example: Add the following line to the new file: <example_user> ALL=(ALL) TYPE=sysadm_t ROLE=sysadm_r ALL Verification Check that <example_user> is mapped to the staff_u SELinux user: Log in as <example_user> , for example, using SSH, and switch to the root user: Show the root security context: Try an administrative task, for example, restarting the sshd service: If there is no output, the command finished successfully. 
If the command does not finish successfully, it prints the following message: 3.10. Additional resources unconfined_selinux(8) , user_selinux(8) , staff_selinux(8) , and sysadm_selinux(8) man pages installed with the selinux-policy-doc package. How to set up a system with SELinux confined users How to modify SELinux settings with booleans | [
"user_u:user_r:user_t",
"# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 *",
"default unconfined_u s0-s0:c0.c1023 *",
"seinfo -u Users: 8 guest_u root staff_u sysadm_u system_u unconfined_u user_u xguest_u",
"USD seinfo -r Roles: 14 auditadm_r dbadm_r guest_r logadm_r nx_server_r object_r secadm_r staff_r sysadm_r system_r unconfined_r user_r webadm_r xguest_r",
"semanage boolean -l SELinux boolean State Default Description ... xguest_connect_network (on , on) Allow xguest users to configure Network Manager and connect to apache ports xguest_exec_content (on , on) Allow xguest to exec content ...",
"SELinux boolean State Default Description guest_exec_content (on , on) Allow guest to exec content",
"SELinux boolean State Default Description xguest_connect_network (on , on) Allow xguest users to configure Network Manager and connect to apache ports xguest_exec_content (on , on) Allow xguest to exec content xguest_mount_media (on , on) Allow xguest users to mount removable media xguest_use_bluetooth (on , on) Allow xguest to use blue tooth devices",
"SELinux boolean State Default Description unprivuser_use_svirt (off , off) Allow unprivileged user to create and transition to svirt domains.",
"SELinux boolean State Default Description staff_exec_content (on , on) Allow staff to exec content staff_use_svirt (on , on) allow staff user to create and transition to svirt domains.",
"SELinux boolean State Default Description auditadm_exec_content (on , on) Allow auditadm to exec content",
"SELinux boolean State Default Description dbadm_exec_content (on , on) Allow dbadm to exec content dbadm_manage_user_files (off , off) Determine whether dbadm can manage generic user files. dbadm_read_user_files (off , off) Determine whether dbadm can read generic user files.",
"SELinux boolean State Default Description logadm_exec_content (on , on) Allow logadm to exec content",
"SELinux boolean State Default Description webadm_manage_user_files (off , off) Determine whether webadm can manage generic user files. webadm_read_user_files (off , off) Determine whether webadm can read generic user files.",
"SELinux boolean State Default Description secadm_exec_content (on , on) Allow secadm to exec content",
"setsebool -P ssh_sysadm_login on",
"SELinux boolean State Default Description ssh_sysadm_login (on , on) Allow ssh logins as sysadm_r:sysadm_t sysadm_exec_content (on , on) Allow sysadm to exec content xdm_sysadm_login (on , on) Allow the graphical login program to login directly as sysadm_r:sysadm_t",
"# useradd <example_user>",
"# passwd <example_user> Changing password for user <example_user> . New password: Retype new password: passwd: all authentication tokens updated successfully.",
"id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"# useradd -Z staff_u <example_user>",
"# passwd <example_user> Changing password for user <example_user> . New password: Retype new password: passwd: all authentication tokens updated successfully.",
"id -Z uid=1000( <example_user> ) gid=1000( <example_user> ) groups=1000( <example_user> ) context=staff_u:staff_r:staff_t:s0-s0:c0.c1023",
"semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 *",
"semanage login -m -s user_u -r s0 __default__",
"semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ user_u s0 * root unconfined_u s0-s0:c0.c1023 *",
"adduser <example_user>",
"passwd <example_user>",
"[ <example_user> @localhost ~]USD id -Z user_u:user_r:user_t:s0",
"[ <example_user> @localhost ~]USD ps axZ LABEL PID TTY STAT TIME COMMAND - 1 ? Ss 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 - 3729 ? S 0:00 (sd-pam) user_u:user_r:user_t:s0 3907 ? Ss 0:00 /usr/lib/systemd/systemd --user - 3911 ? S 0:00 (sd-pam) user_u:user_r:user_t:s0 3918 ? S 0:00 sshd: <example_user> @pts/0 user_u:user_r:user_t:s0 3922 pts/0 Ss 0:00 -bash user_u:user_r:user_dbusd_t:s0 3969 ? Ssl 0:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only user_u:user_r:user_t:s0 3971 pts/0 R+ 0:00 ps axZ",
"setsebool -P ssh_sysadm_login on",
"adduser -G wheel -Z sysadm_u <example_user>",
"usermod -G wheel -Z sysadm_u <example_user>",
"restorecon -R -F -v /home/ <example_user>",
"semanage login -l | grep <example_user> <example_user> sysadm_u s0-s0:c0.c1023 *",
"[ <example_user> @localhost ~]USD id -Z sysadm_u:sysadm_r:sysadm_t:s0-s0:c0.c1023",
"sudo -i [sudo] password for <example_user> :",
"id -Z sysadm_u:sysadm_r:sysadm_t:s0-s0:c0.c1023",
"systemctl restart sshd",
"Failed to restart sshd.service: Access denied See system logs and 'systemctl status sshd.service' for details.",
"adduser -G wheel -Z staff_u <example_user>",
"usermod -G wheel -Z staff_u <example_user>",
"restorecon -R -F -v /home/ <example_user>",
"visudo -f /etc/sudoers.d/ <example_user>",
"<example_user> ALL=(ALL) TYPE=sysadm_t ROLE=sysadm_r ALL",
"semanage login -l | grep <example_user> <example_user> staff_u s0-s0:c0.c1023 *",
"[ <example_user> @localhost ~]USD sudo -i [sudo] password for <example_user> :",
"id -Z staff_u:sysadm_r:sysadm_t:s0-s0:c0.c1023",
"systemctl restart sshd",
"Failed to restart sshd.service: Access denied See system logs and 'systemctl status sshd.service' for details."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/managing-confined-and-unconfined-users_using-selinux |
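To audit how far a system is from the confined-user setup described above, it can help to list which login mappings still resolve to unconfined_u. The following is a minimal sketch under the assumption that the semanage utility (policycoreutils-python-utils) is installed and the commands run as root; it is not taken from the original chapter.

# Print every SELinux login mapping that still resolves to unconfined_u.
# The header line of `semanage login -l` is skipped automatically because
# its second field is "Name", not "unconfined_u".
semanage login -l | awk '$2 == "unconfined_u" {print "unconfined mapping:", $1}'

# To revert the __default__ mapping back to unconfined_u after testing the
# user_u confinement, a command of this form can be used (the range shown is
# the default for unconfined_u):
# semanage login -m -s unconfined_u -r s0-s0:c0.c1023 __default__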
10.5. Testing Enrollment | 10.5. Testing Enrollment For information on testing enrollment through the profiles, see Chapter 3, Making Rules for Issuing Certificates (Certificate Profiles) . To test whether end users can successfully enroll for a certificate by using the configured authentication method: Open the end-entities page. In the Enrollment tab, open the customized enrollment form. Fill in the values, and submit the request. Enter the password for the key database when prompted. When the correct password is entered, the client generates the key pair. Do not interrupt the key-generation process. When the key generation completes, the request is submitted to the server to issue the certificate. The server subjects the request to the certificate profile and issues the certificate only if the request meets all the requirements. When the certificate is issued, install the certificate in the browser. Verify that the certificate is installed in the browser's certificate database. If PIN-based directory authentication was configured with PIN removal, re-enroll for another certificate using the same PIN. The request should be rejected. | [
"http s ://server.example.com: 8443/ca/ee/ca"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/testing_enrollment |
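Before walking end users through the enrollment form, it can be useful to confirm that the end-entities interface is reachable at all. The following one-liner is a sketch only: it assumes the CA host name and port from the example URL above, and it uses curl with certificate verification disabled, which is acceptable only in a test environment.

# Expect an HTTP 200 status code if the end-entities page is being served.
curl -sk -o /dev/null -w '%{http_code}\n' https://server.example.com:8443/ca/ee/ca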
8.186. procps | 8.186. procps 8.186.1. RHBA-2014:1595 - procps bug fix and enhancement update Updated procps packages that fix two bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The procps packages contain a set of system utilities that provide system information. The procps packages include the following utilities: ps, free, skill, pkill, pgrep, snice, tload, top, uptime, vmstat, w, watch, and pwdx. Bug Fixes BZ# 950748 The /lib64/libproc.so development symbolic link was present in both the main procps package and its devel sub-package. This caused file conflicts when installing the devel sub-package. This update removes the duplicate symbolic link from the main package so that the devel sub-package can be installed without problems. BZ# 963799 The 'free' command always displayed zero in the 'shared' column because the procps-ng library was attempting to read from the non-existent 'MemShared' field in the /proc/meminfo file. With this update, the 'shared' column is reused for a value representing the 'MemShared' field, thus fixing this bug. This update also introduces a new '-a' option for the free command that enables a new column that represents a recently added field called 'MemAvailable'. The kernel does not export this field by default, so it needs to be explicitly enabled. Refer to the free(1) man page for more details. In addition, this update adds the following enhancements: BZ# 977467 Previously, only one configuration file could be passed to the 'sysctl' tool with the '-p' option. This update allows users to pass multiple configuration files with this option. As a result, users can perform shell expansion by using braces and wildcard characters. BZ# 1105125 With this update, the 'top' and 'watch' tools accept floating point numbers representing the polling or refresh intervals. Both widely used floating point separators ('.' and ',') can be applied, regardless of the locale settings in use. BZ# 1034337 This update introduces man pages for the openproc(), readproc() and readproctab() functions available in the libproc library. These manual pages help with writing applications that utilize the aforementioned functions. BZ# 1060681 This update introduces a new 'q' option (alternatively '-q' or '--quick-pid') to the 'ps' command. This option is essentially a speed-optimized enhancement of the 'p' option. The new option is recommended in cases where users only need to specify a list of PIDs to be shown, and do not need other selection and sorting options. BZ# 1011216 , BZ# 1082877 , BZ# 1089817 This update also enhances several man pages. Users of procps are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/procps
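The new options described in the erratum can be exercised directly from a shell. The commands below are illustrative sketches: the file paths and PIDs are placeholders, and the exact option behavior should be confirmed against the free(1), sysctl(8), watch(1), and ps(1) man pages shipped with the updated packages.

free -a                          # adds the column for the MemAvailable field (per BZ#963799)
sysctl -p /etc/sysctl.d/*.conf   # '-p' now accepts several files, so shell globs work (BZ#977467)
watch -n 0.5 date                # fractional refresh interval accepted (BZ#1105125)
ps -q 1,2 -o pid,comm            # quick PID-list selection added by the 'q' option (BZ#1060681)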
Chapter 10. Configuring the chaining policy | Chapter 10. Configuring the chaining policy You can configure Directory Server to chain requests from client applications to a Directory Server instance that contains database links. The chaining policy applies to all database links created on Directory Server. 10.1. Chaining component operations A component is any functional unit in the server that uses internal operations, for example, a plug-in or function in the front end. Some components send internal LDAP requests to the server, expecting to access local data only. For such components, you must control the chaining policy so that the components can complete their operations successfully. For example, consider the certificate verification function. Chaining the LDAP request that this function makes to check certificates implies that the remote server is trusted. If the remote server is not trusted, there is a security problem. By default, internal operations are not chained for any component, but you can override this default. Additionally, you must create an ACI on the remote server to enable the specified plug-in to perform its operation on the remote server. The ACI must exist in the suffix assigned to the database link. The following are the component names, their potential side effects when you allow these components to chain internal operations, and the permissions the components need in the ACI on the remote server: The ACI plug-in component The ACI plug-in component implements access control. You cannot chain operations used to retrieve and update ACI attributes because it is not safe to mix local and remote attributes. However, you can chain requests used to retrieve user entries by setting the following chaining components attribute: nsActiveChainingComponents: cn=ACI Plugin,cn=plugins,cn=config Permissions: Read, search, and compare. The resource limit component The resource limits component sets server limits depending on the user bind DN. If you chain the resource limits component, you can apply resource limits to remote users. To chain resource limit component operations, add the following chaining component attribute: nsActiveChainingComponents: cn=resource limits,cn=components,cn=config Permissions: Read, search, and compare. The certificate-based authentication component You can use the certificate-based authentication component during the external bind method. This component retrieves user certificates from the database on the remote server. When you allow this component to chain, it enables certificate-based authentication to work with the database link. To chain this component's operations, add the following chaining component attribute: nsActiveChainingComponents: cn=certificate-based authentication,cn=components,cn=config Permissions: Read, search, and compare. The password policy component The password policy component adds SASL binds to the remote server. Authenticating with a user name and password is essential for some forms of SASL authentication. When you enable the password policy, it allows the server to verify and implement the specific authentication method requested and to apply the appropriate password policies. To chain this component's operations, add the chaining component attribute: nsActiveChainingComponents: cn=password policy,cn=components,cn=config Permissions: Read, search, and compare. The SASL component The SASL component allows SASL binds to the remote server.
To chain this component's operations, add the chaining component attribute: nsActiveChainingComponents: cn=password policy,cn=components,cn=config Permissions: Read, search, and compare. The referential integrity postoperation component The referential integrity postoperation component propagates updates made to attributes containing DNs to the entries that contain pointers to the attributes. For example, when you delete an entry, the plug-in automatically removes that entry from any groups that reference it. Using the referential integrity postoperation plug-in together with chaining simplifies the management of static groups when the group members are remote to the static group definition. nsActiveChainingComponents: cn=referential integrity postoperation,cn=plugins,cn=config Permissions: Read, search, and compare. The attribute Uniqueness component The attribute Uniqueness component validates that all the values for a specified attribute are unique. When you chain the plug-in, it confirms that attribute values are unique even when attributes are changed through a database link. To chain this component's operations, add the chaining component attribute: nsActiveChainingComponents: cn=attribute uniqueness,cn=plugins,cn=config Permissions: Read, search, and compare. The roles component The roles component chains the roles and role assignments for the entries in a database. When you chain this component, it maintains the roles even on chained databases. To chain this component's operations, add the chaining component attribute: nsActiveChainingComponents: cn=roles,cn=components,cn=config Permissions: Read, search, and compare. Note You cannot chain the Roles plug-in, Password policy component, Replication plug-in, and Referential Integrity plug-in components. When you enable the Referential Integrity plug-in on servers that issue chaining requests, ensure that you analyze the performance, resource, time, and integrity needs. Note that integrity checks can be time-consuming and can drain memory and CPU. 10.2. Chaining component operations using the command line You can add a component allowed to chain by using the command line: Procedure Specify the components to include in chaining: Restart the instance: Create an ACI in the suffix on the remote server to which the operation will be chained: Verification Display the components allowed to chain: 10.3. Chaining component operations using the web console You can add a component allowed to chain by using the web console: Prerequisites You have opened the Directory Server user interface in the web console and selected the instance. Procedure Open the Database tab. In the navigation on the left, select the Chaining Configuration entry. Click the Add button below the Components to Chain field. Select the component that you want to chain, and click Add & Save New Components . Create an ACI in the suffix on the remote server to which the operation will be chained: Verification The selected component appears in the list of chained components. | [
"nsActiveChainingComponents: cn=ACI Plugin,cn=plugins,cn=config",
"nsActiveChainingComponents: cn=resource limits,cn=components,cn=config",
"nsActiveChainingComponents: cn=certificate-based authentication,cn=components,cn=config",
"nsActiveChainingComponents: cn=password policy,cn=components,cn=config",
"nsActiveChainingComponents: cn=password policy,cn=components,cn=config",
"nsActiveChainingComponents: cn=referential integrity postoperation,cn=plugins,cn=config",
"nsActiveChainingComponents: cn=attribute uniqueness,cn=plugins,cn=config",
"nsActiveChainingComponents: cn=roles,cn=components,cn=config",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set \\ --add-comp=\"cn=referential integrity postoperation,cn=components,cn=config\"",
"dsctl instance_name restart",
"ldapmodify -D \"cn=Directory Manager\" -W -H 389 remoteserver.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(target=\"ldap:///ou=customers,ou=People,dc=example,dc=com\") (version 3.0; acl \"RefInt Access for chaining\"; allow (read,write,search,compare) userdn = \"ldap:///cn=referential integrity postoperation,cn=plugins,cn=config\";)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set \\ --add-comp=\"cn=referential integrity postoperation,cn=components,cn=config\"",
"ldapmodify -D \"cn=Directory Manager\" -W -H 389 remoteserver.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(target=\"ldap:///ou=customers,ou=People,dc=example,dc=com\") (version 3.0; acl \"RefInt Access for chaining\"; allow (read,write,search,compare) userdn = \"ldap:///cn=referential integrity postoperation,cn=plugins,cn=config\";)"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/configuring-the-chaining-policy_configuring-directory-databases |
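The verification steps above refer to displaying the components allowed to chain. One way to inspect them is to read the nsActiveChainingComponents attribute from the chaining configuration entry. This is a sketch only: the bind DN and host name are placeholders, and the entry DN cn=config,cn=chaining database,cn=plugins,cn=config is assumed to be the default chaining configuration entry for the instance.

# Read the currently active chaining components directly over LDAP.
ldapsearch -D "cn=Directory Manager" -W -x -H ldap://server.example.com \
  -b "cn=config,cn=chaining database,cn=plugins,cn=config" \
  -s base nsActiveChainingComponents

# The dsconf counterpart is assumed to be the config-get subcommand, mirroring
# the config-set command used above:
# dsconf -D "cn=Directory Manager" ldap://server.example.com chaining config-get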
Registry | Registry OpenShift Container Platform 4.14 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule",
"topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local",
"oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: swift: container: <container-id>",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF",
"bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF",
"bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')",
"AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry",
"route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF",
"oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge",
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull <name.io>/<image>",
"sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: oc whoami -t",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"sudo mv tls.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/registry/index |
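Note: the route-related commands above can be chained into a short end-to-end check of the exposed registry. The following is a minimal sketch rather than part of the official procedure; myapp is a placeholder image name, and it assumes the kubeadmin token is still valid and that the logged-in user is allowed to push to the openshift project:

oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
# --tls-verify=false is only for a quick check; trust the route certificate for real use
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST
podman tag myapp:latest $HOST/openshift/myapp:latest
podman push $HOST/openshift/myapp:latest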
Appendix B. Using Red Hat Maven repositories | Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat | [
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository>"
] | https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/using_red_hat_maven_repositories |
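As a worked example of section B.2, the profile structure from the online example can be reused to point Maven at a locally extracted repository. This is a sketch only; the path /home/alice/maven-repository is the placeholder from Table B.1 and must be replaced with the location where you extracted the archive:

<settings>
  <profiles>
    <profile>
      <id>red-hat-local</id>
      <repositories>
        <repository>
          <id>red-hat-local</id>
          <!-- file URL of the locally extracted repository -->
          <url>file:/home/alice/maven-repository</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>red-hat-local</activeProfile>
  </activeProfiles>
</settings>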
Chapter 9. Lease [coordination.k8s.io/v1] | Chapter 9. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LeaseSpec is a specification of a Lease. 9.1.1. .spec Description LeaseSpec is a specification of a Lease. Type object Property Type Description acquireTime MicroTime acquireTime is a time when the current lease was acquired. holderIdentity string holderIdentity contains the identity of the holder of a current lease. leaseDurationSeconds integer leaseDurationSeconds is a duration that candidates for a lease need to wait to force acquire it. This is measure against time of last observed RenewTime. leaseTransitions integer leaseTransitions is the number of transitions of a lease between holders. renewTime MicroTime renewTime is a time when the current holder of a lease has last updated the lease. 9.2. API endpoints The following API endpoints are available: /apis/coordination.k8s.io/v1/leases GET : list or watch objects of kind Lease /apis/coordination.k8s.io/v1/watch/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases DELETE : delete collection of Lease GET : list or watch objects of kind Lease POST : create a Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} DELETE : delete a Lease GET : read the specified Lease PATCH : partially update the specified Lease PUT : replace the specified Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} GET : watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/coordination.k8s.io/v1/leases Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Lease Table 9.2. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty 9.2.2. /apis/coordination.k8s.io/v1/watch/leases Table 9.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases Table 9.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Lease Table 9.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. 
If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 9.8. Body parameters Parameter Type Description body DeleteOptions schema Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Lease Table 9.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 9.11. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty HTTP method POST Description create a Lease Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body Lease schema Table 9.14. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 202 - Accepted Lease schema 401 - Unauthorized Empty 9.2.4. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases Table 9.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} Table 9.18. Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 9.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Lease Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 9.21. Body parameters Parameter Type Description body DeleteOptions schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Lease Table 9.23. HTTP responses HTTP code Reponse body 200 - OK Lease schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Lease Table 9.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 9.25. Body parameters Parameter Type Description body Patch schema Table 9.26. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Lease Table 9.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.28. Body parameters Parameter Type Description body Lease schema Table 9.29. 
HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty 9.2.6. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} Table 9.30. Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 9.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/metadata_apis/lease-coordination-k8s-io-v1
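To make the LeaseSpec fields in section 9.1 concrete, the following is a minimal example manifest. It is a sketch with placeholder values (example-lease, example-namespace, example-holder, and the timestamps), not output from a cluster:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
  namespace: example-namespace
spec:
  holderIdentity: example-holder               # identity of the current lease holder
  leaseDurationSeconds: 15                     # how long candidates wait before force-acquiring
  acquireTime: "2024-01-01T00:00:00.000000Z"   # MicroTime when the current lease was acquired
  renewTime: "2024-01-01T00:00:30.000000Z"     # MicroTime of the holder's last renewal
  leaseTransitions: 1                          # number of transitions between holders

Such a manifest could be created with oc apply -f lease.yaml, which corresponds to the POST operation on the /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases endpoint described above.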
1.2. Features and Usage Modes | 1.2. Features and Usage Modes The following table presents a list of features and indicates the usage mode for each feature. Red Hat JBoss Data Grid 7.0 includes full support for both Remote Client-Server mode and Library mode. Table 1.1. JBoss Data Grid Features Feature Remote Client-Server Mode (Supported) Library Mode (Supported) File Cache Store and Loading JDBC Cache Store and Loading LevelDB Cache Store and Loading Cassandra Cache Store and Loading Cache Passivation Remote Cache Store Cluster Cache Store Asynchronous Store Cluster Configuration Using UDP Cluster Configuration Using TCP Mortal and Immortal Data Eviction Strategy Expiration Unscheduled Write-behind Cache Store Write-through Cache Store Clustering Mode (local) Clustering Mode (replicated) Clustering Mode (invalidation) Clustering Mode (distribution) Asynchronous Clustering Modes Marshalling Management Using JMX Cross-Datacenter Replication and State Transfer JBoss Operations Network (JON) Integration and Plugin Asymmetric Cluster Command Line Interface (CLI) Role-based Access Control Node Authentication and Authorization Encrypted Communication Within the Cluster Per Invocation Flags Handling Network Partitions Spring Integration Apache Camel Component for JBoss Fuse Querying (by values) Continuous Queries Clustered Listeners and Notifications for Cache Events Near Caching JSR-107 Support CDI Asynchronous API Distributed Streams 1 Deploy custom cache store to JDG Server Connection Pooling with JDBC Cache Stores REST Interface Memcached Interface Hot Rod Java client Hot Rod C++ Client Hot Rod .NET Client Hot Rod Node.js Client Data Compatibility Between Client-server Protocols Data Compatibility Between Hot Rod Java and C++ Client Rolling Upgrades for Hot Rod Cluster Rolling Upgrades for REST Clusters Controlled Shutdown and Restart of Cluster Authentication and Encryption over Hot Rod (Java client) JBoss Data Grid's Hot Rod Client as a JBoss EAP Module Externalizing HTTP sessions from JBoss EAP 7 to remote JDG cluster Remote Task Execution Apache Spark Integration Apache Hadoop Integration Administration Console READ_COMMITTED and REPEATABLE_READ Isolation Modes Lazy Deserialization Using the infinispan.xml File in Conjunction with APIs Custom Interceptors 2 Grouping API Java Transactional API (JTA) Support and Configuration Java Transactional API (JTA) Deadlock Detection Transaction Recovery Transaction and Batching Key Affinity Distributed Execution Framework JPA Cache Store JBoss Data Grid as a JBoss EAP Module JDG as Lucene Directory 1: Distributed Streams are available in JBoss Data Grid's Remote Client-Server Mode via Remote Task Execution. 2: Custom Interceptors are deprecated in JBoss Data Grid 7.0.0, and are expected to be removed in a subsequent version. Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/feature_support_document/jboss_data_grid_features_and_usage_modes |
5.10. Starting Volumes | 5.10. Starting Volumes Volumes must be started before they can be mounted. To start a volume, run: # gluster volume start VOLNAME For example, to start the volume glustervol (the command and its output are shown below): | [
"gluster v start glustervol volume start: glustervol: success"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/Starting_Volumes1 |
Chapter 6. Uninstalling a cluster on Azure Stack Hub | Chapter 6. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_data_grid_with_spring/making-open-source-more-inclusive_datagrid |
Chapter 3. Binding [v1] | Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a Binding Table 3.2. Body parameters Parameter Type Description body Binding schema Table 3.3. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.4. Global path parameters Parameter Type Description name string name of the Binding Table 3.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create binding of a Pod Table 3.6. Body parameters Parameter Type Description body Binding schema Table 3.7. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/metadata_apis/binding-v1
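To show the shape of the second endpoint above (binding an existing pod to a node), the following sketch POSTs a Binding body to the pod's binding subresource using the standard Java HTTP client. The API server URL, namespace, pod name, node name, and bearer token are placeholder assumptions, and the JVM must already trust the cluster's CA certificate for the TLS connection to succeed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PodBindingSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with a real API server, namespace, pod, node and token.
        String apiServer = "https://api.example.com:6443";
        String namespace = "default";
        String podName = "my-pod";
        String nodeName = "worker-0";
        String token = System.getenv("K8S_TOKEN");

        // Binding body: metadata.name is the pod, target is an ObjectReference to the node.
        String body = "{"
                + "\"apiVersion\":\"v1\",\"kind\":\"Binding\","
                + "\"metadata\":{\"name\":\"" + podName + "\"},"
                + "\"target\":{\"apiVersion\":\"v1\",\"kind\":\"Node\",\"name\":\"" + nodeName + "\"}"
                + "}";

        // POST /api/v1/namespaces/{namespace}/pods/{name}/binding
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/api/v1/namespaces/" + namespace
                        + "/pods/" + podName + "/binding"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // 200/201/202 on success, 401 Unauthorized otherwise, as listed in Table 3.7.
        System.out.println(response.statusCode() + " " + response.body());
    }
}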
Chapter 15. Package Management with RPM | Chapter 15. Package Management with RPM The RPM Package Manager (RPM) is an open packaging system, available for anyone to use, which runs on Red Hat Enterprise Linux as well as other Linux and UNIX systems. Red Hat, Inc encourages other vendors to use RPM for their own products. RPM is distributable under the terms of the GPL. For the end user, RPM makes system updates easy. Installing, uninstalling, and upgrading RPM packages can be accomplished with short commands. RPM maintains a database of installed packages and their files, so you can invoke powerful queries and verifications on your system. If you prefer a graphical interface, you can use the Package Management Tool to perform many RPM commands. During upgrades, RPM handles configuration files carefully, so that you never lose your customizations - something that you cannot accomplish with regular .tar.gz files. For the developer, RPM allows you to take software source code and package it into source and binary packages for end users. This process is quite simple and is driven from a single file and optional patches that you create. This clear delineation between pristine sources and your patches along with build instructions eases the maintenance of the package as new versions of the software are released. Note Because RPM makes changes to your system, you must be root to install, remove, or upgrade an RPM package. 15.1. RPM Design Goals To understand how to use RPM, it can be helpful to understand RPM's design goals: Upgradability Using RPM, you can upgrade individual components of your system without completely reinstalling. When you get a new release of an operating system based on RPM (such as Red Hat Enterprise Linux), you do not need to reinstall on your machine (as you do with operating systems based on other packaging systems). RPM allows intelligent, fully-automated, in-place upgrades of your system. Configuration files in packages are preserved across upgrades, so you do not lose your customizations. There are no special upgrade files needed to upgrade a package because the same RPM file is used to install and upgrade the package on your system. Powerful Querying RPM is designed to provide powerful querying options. You can do searches through your entire database for packages or just for certain files. You can also easily find out what package a file belongs to and from where the package came. The files an RPM package contains are in a compressed archive, with a custom binary header containing useful information about the package and its contents, allowing you to query individual packages quickly and easily. System Verification Another powerful feature is the ability to verify packages. If you are worried that you deleted an important file for some package, verify the package. You are notified of any anomalies. At that point, you can reinstall the package if necessary. Any configuration files that you modified are preserved during reinstallation. Pristine Sources A crucial design goal was to allow the use of "pristine" software sources, as distributed by the original authors of the software. With RPM, you have the pristine sources along with any patches that were used, plus complete build instructions. This is an important advantage for several reasons. For instance, if a new version of a program comes out, you do not necessarily have to start from scratch to get it to compile. You can look at the patch to see what you might need to do. 
All the compiled-in defaults, and all of the changes that were made to get the software to build properly, are easily visible using this technique. The goal of keeping sources pristine may only seem important for developers, but it results in higher quality software for end users, too. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/package_management_with_rpm |
4.200. nss-pam-ldapd | 4.200. nss-pam-ldapd 4.200.1. RHBA-2011:1705 - nss-pam-ldapd bug fix and enhancement update An updated nss-pam-ldapd package that fixes multiple bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. [Updated 24 January 2012] This advisory has been updated with the correct package description in the Details section. The package included in this revised update has not been changed in any way from the package included in the original advisory. The nss-pam-ldapd package provides the nss-pam-ldapd daemon (nslcd), which uses a directory server to look up name service information on behalf of a lightweight nsswitch module. Bug Fixes BZ# 706454 When the nss-pam-ldapd package was installed, settings for the nslcd daemon were migrated from the configuration files used by the pam_ldap module or a previously-installed copy of the nss_ldap package. If the nslcd configuration file was modified, settings would be migrated again, often with an error. With this update, the migration is performed only if the package has not been previously installed. BZ# 706860 Prior to this update, when the nslcd daemon retrieved information about a user or group, the name of the user or group would be checked against the value of the "validnames" configuration setting. The default value of the setting expected the names to be at least three characters long, therefore names which were only two characters long were flagged as invalid. This could have a negative impact on some installations. With this update, the default value of the "validnames" setting is modified to a minimum of two characters so that short names are accepted. BZ# 716822, BZ# 720230 Because the buffer used for the group field of a user password entry was not big enough, the primary group ID of a user could not be parsed if it contained more than nine digits. As a consequence, the nslcd daemon could drop some of the digits. With this update, nslcd is modified to parse large user IDs properly. BZ# 741362 An incorrect use of the strtol() call could cause large user ID values to overflow on 32-bit architectures. New functions have been implemented with this update, so that large user IDs are parsed correctly. Enhancement BZ# 730309 Previously, if "DNS" was specified as the value of the LDAP "uri" setting in the /etc/nslcd.conf file, the nslcd service would attempt to look up DNS SRV records for the LDAP server (in order to determine which directory server to contact) only in the local host's current DNS domain. As a consequence, nslcd could not search for an LDAP server in a different domain. With this update, the DNS domain which is used in the lookup can now be specified by providing a value in the form "DNS:domainname". All users of nss-pam-ldapd are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement. 4.200.2. RHBA-2012:0055 - nss-pam-ldapd bug fix update An updated nss-pam-ldapd package that fixes one bug is now available for Red Hat Enterprise Linux 6. The nss-pam-ldapd package provides the nss-pam-ldapd daemon (nslcd), which uses a directory server to look up name service information on behalf of a lightweight nsswitch module. Bug Fix BZ# 771322 Previously, the nslcd daemon performed the idle time expiration check for the LDAP connection before starting an LDAP search operation. On a lossy network or if the LDAP server was under a heavy load, a connection could time out after a successful check and the search operation then failed.
With this update, the idle time expiration test is now performed during the LDAP search operation, so the connection no longer expires under these circumstances. All users of nss-pam-ldapd are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/nss-pam-ldapd
Chapter 57. JMS | Chapter 57. JMS Both producer and consumer are supported. This component allows messages to be sent to (or consumed from) a JMS Queue or Topic. It uses Spring's JMS support for declarative transactions, including Spring's JmsTemplate for sending and a MessageListenerContainer for consuming. 57.1. Dependencies When using jms with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency> Note Using ActiveMQ If you are using Apache ActiveMQ, you should prefer the ActiveMQ component as it has been optimized for ActiveMQ. All of the options and samples on this page are also valid for the ActiveMQ component. Note Transacted and caching See the section Transactions and Cache Levels below if you are using transactions with JMS, as it can impact performance. Note Request/Reply over JMS Make sure to read the section Request/reply over JMS further below on this page for important notes about request/reply, as Camel offers a number of options to configure for performance and clustered environments. 57.2. URI format jms:[queue:|topic:]destinationName Where destinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue FOO.BAR, use: jms:FOO.BAR You can include the optional queue: prefix, if you prefer: jms:queue:FOO.BAR To connect to a topic, you must include the topic: prefix. For example, to connect to the topic Stocks.Prices, use: jms:topic:Stocks.Prices You append query options to the URI by using the following format, ?option=value&option=value&... 57.2.1. Using ActiveMQ The JMS component reuses Spring 2's JmsTemplate for sending messages. This is not ideal for use in a non-J2EE container and typically requires some caching in the JMS provider to avoid poor performance. If you intend to use Apache ActiveMQ as your message broker, the recommendation is that you do one of the following: Use the ActiveMQ component, which is already optimized to use ActiveMQ efficiently. Use the PoolingConnectionFactory in ActiveMQ. 57.2.2. Transactions and Cache Levels If you are consuming messages and using transactions (transacted=true), then the default settings for cache level can impact performance. If you are using XA transactions, then you cannot cache, as it can cause the XA transaction to not work properly. If you are not using XA, then you should consider caching as it speeds up performance, such as setting cacheLevelName=CACHE_CONSUMER. The default setting for cacheLevelName is CACHE_AUTO. This default auto-detects the mode and sets the cache level accordingly: CACHE_CONSUMER if transacted=false, CACHE_NONE if transacted=true. So you can say the default setting is conservative. Consider using cacheLevelName=CACHE_CONSUMER if you are using non-XA transactions. 57.2.3. Durable Subscriptions with JMS 1.1 If you wish to use durable topic subscriptions, you need to specify both clientId and durableSubscriptionName. The value of the clientId must be unique and can only be used by a single JMS connection instance in your entire network. Note If you are using Apache ActiveMQ Classic, you may prefer to use a feature called Virtual Topic. This should remove the necessity of having a unique clientId. You can consult the specific documentation for Artemis or for ActiveMQ Classic for details about how to leverage this feature. You can find more details about durable messaging for ActiveMQ Classic here. 57.2.3.1.
Durable Subscriptions with JMS 2.0 If you wish to use durable topic subscriptions, you need to specify the durableSubscriptionName. 57.2.4. Message Header Mapping When using message headers, the JMS specification states that header names must be valid Java identifiers. So try to name your headers to be valid Java identifiers. One benefit of doing this is that you can then use your headers inside a JMS Selector (whose SQL92 syntax mandates Java identifier syntax for headers). A simple strategy for mapping header names is used by default. The strategy is to replace any dots and hyphens in the header name as shown below and to reverse the replacement when the header name is restored from a JMS message sent over the wire. What does this mean? No more losing method names to invoke on a bean component, no more losing the filename header for the File Component, and so on. The current header name strategy for accepting header names in Camel is as follows: Dots are replaced by `DOT` and the replacement is reversed when Camel consumes the message. Hyphens are replaced by `HYPHEN` and the replacement is reversed when Camel consumes the message. You can configure many different properties on the JMS endpoint, which map to properties on the JMSConfiguration object. Note Mapping to Spring JMS Many of these properties map to properties on Spring JMS, which Camel uses for sending and receiving messages. So you can get more information about these properties by consulting the relevant Spring documentation. 57.3. Configuring Options Camel components are configured on two separate levels: the component level and the endpoint level. 57.3.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. You can configure components using the Component DSL, a configuration file (application.properties, *.yaml files, etc.), or directly in the Java code. 57.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders. Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 57.4. Component Options The JMS component supports 98 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use.
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. The clientId option is compulsory with JMS 1.1 durable topic subscriptions, because the client ID is used to control which client messages have to be stored for. With JMS 2.0 clients, clientId may be omitted, which creates a 'global' subscription. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured for a JMS 1.1 durable subscription, and may be configured for JMS 2.0, to create a private durable subscription. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). 
This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id, if the client ID is configured. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. 
Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. 
Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continued routing when a timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use * as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details.
false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. 
By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. 
If you use a range for concurrent consumers (e.g. min/max), then this option can be used to set a value of e.g. 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control of how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which is invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just a hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers!
false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. 
false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 57.5. Endpoint Options The JMS endpoint is configured using URI syntax: with the following path and query parameters: 57.5.1. Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 57.5.2. Query Parameters (95 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. 
Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. 
Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. 
Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. 
true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. 
This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. 
You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. 
The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. 
This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 57.6. Samples JMS is used in many examples for other components as well. But we provide a few samples below to get started. 57.6.1. Receiving from JMS In the following sample we configure a route that receives JMS messages and routes the message to a POJO: from("jms:queue:foo"). to("bean:myBusinessLogic"); You can of course use any of the EIP patterns so the route can be context based. For example, here's how to filter an order topic for the big spenders: from("jms:topic:OrdersTopic"). filter().method("myBean", "isGoldCustomer"). to("jms:queue:BigSpendersQueue"); 57.6.2. Sending to JMS In the sample below we poll a file folder and send the file content to a JMS topic. As we want the content of the file as a TextMessage instead of a BytesMessage , we need to convert the body to a String : from("file://orders"). convertBodyTo(String.class). to("jms:topic:OrdersTopic"); 57.6.3. Using Annotations Camel also has annotations so you can use POJO Consuming and POJO Producing. 57.6.4. Spring DSL sample The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL: <route> <from uri="jms:topic:OrdersTopic"/> <filter> <method ref="myBean" method="isGoldCustomer"/> <to uri="jms:queue:BigSpendersQueue"/> </filter> </route> 57.6.5. Other samples JMS appears in many of the examples for other components and EIP patterns, as well in this Camel documentation. So feel free to browse the documentation. 57.6.6. Using JMS as a Dead Letter Queue storing Exchange Normally, when using JMS as the transport, it only transfers the body and headers as the payload. If you want to use JMS with a Dead Letter Channel , using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the transferExchange option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a javax.jms.ObjectMessage that holds a org.apache.camel.support.DefaultExchangeHolder . This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key Exchange.EXCEPTION_CAUGHT . The demo below illustrates this: // setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true")); Then you can consume from the JMS queue and analyze the problem: from("jms:queue:dead").to("bean:myErrorAnalyzer"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage(); 57.6.7. 
Using JMS as a Dead Letter Channel storing error only You can use JMS to store the cause error message or to store a custom body, which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the JMS dead letter queue: // we sent it to a seda dead queue first errorHandler(deadLetterChannel("seda:dead")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead"); Here we only store the original cause error message in the transform. You can, however, use any Expression to send whatever you like. For example, you can invoke a method on a Bean or use a custom processor. 57.7. Message Mapping between JMS and Camel Camel automatically maps messages between javax.jms.Message and org.apache.camel.Message . When sending a JMS message, Camel converts the message body to the following JMS message types: Body Type JMS Message Comment String javax.jms.TextMessage org.w3c.dom.Node javax.jms.TextMessage The DOM will be converted to String . Map javax.jms.MapMessage java.io.Serializable javax.jms.ObjectMessage byte[] javax.jms.BytesMessage java.io.File javax.jms.BytesMessage java.io.Reader javax.jms.BytesMessage java.io.InputStream javax.jms.BytesMessage java.nio.ByteBuffer javax.jms.BytesMessage When receiving a JMS message, Camel converts the JMS message to the following body type: JMS Message Body Type javax.jms.TextMessage String javax.jms.BytesMessage byte[] javax.jms.MapMessage Map<String, Object> javax.jms.ObjectMessage Object 57.7.1. Disabling auto-mapping of JMS messages You can use the mapJmsMessage option to disable the auto-mapping above. If disabled, Camel will not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and let Camel just pass through the JMS message. For instance, it even allows you to route javax.jms.ObjectMessage JMS messages with classes you do not have on the classpath. 57.7.2. Using a custom MessageConverter You can use the messageConverter option to do the mapping yourself in a Spring org.springframework.jms.support.converter.MessageConverter class. For example, in the route below we use a custom message converter when sending a message to the JMS order queue: from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter"); You can also use a custom message converter when consuming from a JMS destination. 57.7.3. Controlling the mapping strategy selected You can use the jmsMessageType option on the endpoint URL to force a specific message type for all messages. In the route below, we poll files from a folder and send them as javax.jms.TextMessage as we have forced the JMS producer endpoint to use text messages: from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text"); You can also specify the message type to use for each message by setting the header with the key CamelJmsMessageType . For example: from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order"); The possible values are defined in the enum class, org.apache.camel.jms.JmsMessageType . 57.8. Message format when sending The exchange that is sent over the JMS wire must conform to the JMS Message spec . For the exchange.in.header the following rules apply for the header keys : Keys starting with JMS or JMSX are reserved. 
exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name). Camel replaces dots & hyphens and the reverse when consuming JMS messages: . is replaced by `DOT` and the reverse replacement when Camel consumes the message. - is replaced by `HYPHEN` and the reverse replacement when Camel consumes the message. See also the option jmsKeyFormatStrategy , which allows use of your own custom strategy for formatting keys. For the exchange.in.header , the following rules apply for the header values : The values must be primitives or their counterpart objects (such as Integer , Long , Character ). The types String , CharSequence , Date , BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped. Camel will log with category org.apache.camel.component.jms.JmsBinding at DEBUG level if it drops a given header value. 57.9. Message format when receiving Camel adds the following properties to the Exchange when it receives a message: Property Type Description org.apache.camel.jms.replyDestination javax.jms.Destination The reply destination. Camel adds the following JMS properties to the In message headers when it receives a JMS message: Header Type Description JMSCorrelationID String The JMS correlation ID. JMSDeliveryMode int The JMS delivery mode. JMSDestination javax.jms.Destination The JMS destination. JMSExpiration long The JMS expiration. JMSMessageID String The JMS unique message ID. JMSPriority int The JMS priority (with 0 as the lowest priority and 9 as the highest). JMSRedelivered boolean Is the JMS message redelivered. JMSReplyTo javax.jms.Destination The JMS reply-to destination. JMSTimestamp long The JMS timestamp. JMSType String The JMS type. JMSXGroupID String The JMS group ID. As all the above information is standard JMS you can check the JMS documentation for further details. 57.10. About using Camel to send and receive messages and JMSReplyTo The JMS component is complex and you have to pay close attention to how it works in some cases. So this is a short summary of some of the areas/pitfalls to look for. When Camel sends a message using its JMSProducer , it checks the following conditions: The message exchange pattern, Whether a JMSReplyTo was set in the endpoint or in the message headers, Whether any of the following options have been set on the JMS endpoint: disableReplyTo , preserveMessageQos , explicitQosEnabled . All this can be a tad complex to understand and configure to support your use case. 57.10.1. JmsProducer The JmsProducer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will expect a reply, set a temporary JMSReplyTo , and after sending the message, it will start to listen for the reply message on the temporary queue. InOut JMSReplyTo is set Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue. InOnly - Camel will send the message and not expect a reply. InOnly JMSReplyTo is set By default, Camel discards the JMSReplyTo destination and clears the JMSReplyTo header before sending the message. Camel then sends the message and does not expect a reply. Camel logs this at WARN level (changed to DEBUG level from Camel 2.6 onwards). You can use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo . In all situations the JmsProducer does not expect any reply and thus continues after sending the message.
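To make the behaviour in this table concrete, the following is a minimal sketch (not taken from the component documentation itself) that contrasts an InOut send, where the JmsProducer sets a temporary JMSReplyTo and waits for the reply, with an InOnly send where no reply is expected. The activemq component name and the two bean names are assumptions used only for illustration:
// inside a RouteBuilder.configure() method; "activemq" is an assumed JMS component name
from("direct:quote")
    .setExchangePattern(ExchangePattern.InOut)    // request/reply: a temporary JMSReplyTo is created and the reply is awaited
    .to("activemq:queue:quoteRequest")
    .to("bean:handleQuoteReply");                 // hypothetical bean that processes the reply body

from("direct:audit")
    .setExchangePattern(ExchangePattern.InOnly)   // fire and forget: no reply is expected
    .to("activemq:queue:auditLog");
The pattern can also be set inline on the to() call, for example .to(ExchangePattern.InOut, "activemq:queue:quoteRequest").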
57.10.2. JmsConsumer The JmsConsumer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will send the reply back to the JMSReplyTo queue. InOnly - Camel will not send a reply back, as the pattern is InOnly . - disableReplyTo=true This option suppresses replies. So pay attention to the message exchange pattern set on your exchanges. If you send a message to a JMS destination in the middle of your route, you can specify the exchange pattern to use; see more at Request Reply. This is useful if you want to send an InOnly message to a JMS topic: from("activemq:queue:in") .to("bean:validateOrder") .to(ExchangePattern.InOnly, "activemq:topic:order") .to("bean:handleOrder"); 57.11. Reuse endpoint and send to different destinations computed at runtime If you need to send messages to a lot of different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources. You can specify the destination in the following headers: Header Type Description CamelJmsDestination javax.jms.Destination A destination object. CamelJmsDestinationName String The destination name. For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL: from("file://inbox") .to("bean:computeDestination") .to("activemq:queue:dummy"); The queue name, dummy , is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example. In the computeDestination bean, specify the real destination by setting the CamelJmsDestinationName header as follows: public void setJmsHeader(Exchange exchange) { String id = .... exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id); } Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to activemq:queue:order:2 , assuming the id value was 2. If both the CamelJmsDestination and the CamelJmsDestinationName headers are set, CamelJmsDestination takes priority. Keep in mind that the JMS producer removes both CamelJmsDestination and CamelJmsDestinationName headers from the exchange and does not propagate them to the created JMS message, in order to avoid accidental loops in the routes (in scenarios where the message is forwarded to another JMS endpoint). 57.12. Configuring different JMS providers You can configure your JMS provider by registering a JMS component instance in Spring XML. Basically, you can configure as many JMS component instances as you wish and give them a unique name using the id attribute. For example, you could configure an activemq component, and do the same to configure MQSeries, TibCo, BEA, Sonic and so on (see the sketch at the end of this section). Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example, for the component name activemq , you can refer to destinations using the URI format activemq:[queue:|topic:]destinationName . You can use the same approach for all other JMS providers. This works by the SpringCamelContext lazily fetching components from the Spring context for the scheme name you use for Endpoint URIs and having the Component resolve the endpoint URIs.
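As an illustration of the wiring described above, here is a minimal Java sketch that registers a provider-specific component under the name activemq; the ActiveMQConnectionFactory class and broker URL are assumptions for the ActiveMQ case, and the equivalent can be expressed as a <bean> definition with the same id in Spring XML:
// assumed imports: org.apache.activemq.ActiveMQConnectionFactory, org.apache.camel.component.jms.JmsComponent
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker URL
JmsComponent activemq = JmsComponent.jmsComponentAutoAcknowledge(connectionFactory);
camelContext.addComponent("activemq", activemq);   // endpoints can now use activemq:queue:... and activemq:topic:...
Registering a second component with a different id (for example wmq for WebSphere MQ) follows the same pattern, using that provider's ConnectionFactory.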
57.12.1. Using JNDI to find the ConnectionFactory If you are using a J2EE container, you might need to look up JNDI to find the JMS ConnectionFactory rather than use the usual <bean> mechanism in Spring. You can do this using Spring's factory bean or the new Spring XML namespace. For example: <bean id="weblogic" class="org.apache.camel.component.jms.JmsComponent"> <property name="connectionFactory" ref="myConnectionFactory"/> </bean> <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/> See The jee schema in the Spring reference documentation for more details about JNDI lookup. 57.13. Concurrent Consuming A common requirement with JMS is to consume messages concurrently in multiple threads in order to make an application more responsive. You can set the concurrentConsumers option to specify the number of threads servicing the JMS endpoint, as follows: from("jms:SomeQueue?concurrentConsumers=20"). bean(MyClass.class); You can configure this option in one of the following ways: On the JmsComponent , On the endpoint URI, or By invoking setConcurrentConsumers() directly on the JmsEndpoint . 57.13.1. Concurrent Consuming with async consumer Notice that each concurrent consumer will only pick up the next available message from the JMS broker once the current message has been fully processed. You can set the option asyncConsumer=true to let the consumer pick up the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the table on top of the page about the asyncConsumer option. from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true"). bean(MyClass.class); 57.14. Request/reply over JMS Camel supports request/reply over JMS. In essence the MEP of the Exchange should be InOut when you send a message to a JMS queue. Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summarizes the options. Option Performance Cluster Description Temporary Fast Yes A temporary queue is used as reply queue, and is automatically created by Camel. To use this, do not specify a replyTo queue name. You can optionally configure replyToType=Temporary to make it stand out that temporary queues are in use. Shared Slow Yes A shared persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this, you must specify the replyTo queue name. You can optionally configure replyToType=Shared to make it stand out that shared queues are in use. A shared queue can be used in a clustered environment with multiple nodes running this Camel application at the same time, all using the same shared reply queue. This is possible because JMS Message selectors are used to correlate expected reply messages; this impacts performance, though. JMS Message selectors are slower, and therefore not as fast as Temporary or Exclusive queues. See further below how to tweak this for better performance. Exclusive Fast No (*Yes) An exclusive persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this, you must specify the replyTo queue name. And you must configure replyToType=Exclusive to instruct Camel to use exclusive queues, as Shared is used by default, if a replyTo queue name was configured.
When using exclusive reply queues, then JMS Message selectors are not in use, and therefore other applications must not use this queue as well. An exclusive queue cannot be used in a clustered environment with multiple nodes running this Camel application at the same time; as we do not have control if the reply queue comes back to the same node that sent the request message; that is why shared queues use JMS Message selectors to make sure of this. Though if you configure each Exclusive reply queue with an unique name per node, then you can run this in a clustered environment. As then the reply message will be sent back to that queue for the given node, that awaits the reply message. replyToConcurrentConsumers Fast Yes Allows to process reply messages concurrently using concurrent message listeners in use. You can specify a range using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. Notice: That using Shared reply queues may not work as well with concurrent listeners, so use this option with care. replyToMaxConcurrentConsumers Fast Yes Allows to process reply messages concurrently using concurrent message listeners in use. You can specify a range using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. Notice: That using Shared reply queues may not work as well with concurrent listeners, so use this option with care. The JmsProducer detects the InOut and provides a JMSReplyTo header with the reply destination to be used. By default Camel uses a temporary queue, but you can use the replyTo option on the endpoint to specify a fixed reply queue (see more below about fixed reply queue). Camel will automatically setup a consumer which listen on the reply queue, so you should not do anything. This consumer is a Spring DefaultMessageListenerContainer which listen for replies. However it's fixed to 1 concurrent consumer. That means replies will be processed in sequence as there are only 1 thread to process the replies. You can configure the listener to use concurrent threads using the replyToConcurrentConsumers and replyToMaxConcurrentConsumers options. This allows you to easier configure this in Camel as shown below: from(xxx) .inOut().to("activemq:queue:foo?replyToConcurrentConsumers=5") .to(yyy) .to(zzz); In this route we instruct Camel to route replies asynchronously using a thread pool with 5 threads. 57.14.1. Request/reply over JMS and using a shared fixed reply queue You can use a fixed reply queue when doing request/reply over JMS as shown in the example below. from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar") .to(yyy) In this example the fixed reply queue named "bar" is used. By default Camel assumes the queue is shared when using fixed reply queues, and therefore it uses a JMSSelector to only pickup the expected reply messages (eg based on the JMSCorrelationID ). See section for exclusive fixed reply queues. That means its not as fast as temporary queues. You can speedup how often Camel will pull for reply messages using the receiveTimeout option. By default its 1000 millis. So to make it faster you can set it to 250 millis to pull 4 times per second as shown: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&receiveTimeout=250") .to(yyy) Notice this will cause the Camel to send pull requests to the message broker more frequent, and thus require more network traffic. It is generally recommended to use temporary queues if possible. 57.14.2. 
Request/reply over JMS and using an exclusive fixed reply queue In the example, Camel would anticipate the fixed reply queue named "bar" was shared, and thus it uses a JMSSelector to only consume reply messages which it expects. However there is a drawback doing this as the JMS selector is slower. Also the consumer on the reply queue is slower to update with new JMS selector ids. In fact it only updates when the receiveTimeout option times out, which by default is 1 second. So in theory the reply messages could take up till about 1 sec to be detected. On the other hand if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using the JMS selectors, and thus be more performant. In fact as fast as using temporary queues. There is the ReplyToType option which you can configure to Exclusive to tell Camel that the reply queue is exclusive as shown in the example below: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy) Mind that the queue must be exclusive to each and every endpoint. So if you have two routes, then they each need an unique reply queue as shown in the example: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy) from(aaa) .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive") .to(bbb) The same applies if you run in a clustered environment. Then each node in the cluster must use an unique reply queue name. As otherwise each node in the cluster may pickup messages which was intended as a reply on another node. For clustered environments its recommended to use shared reply queues instead. 57.15. Synchronizing clocks between senders and receivers When doing messaging between systems, its desirable that the systems have synchronized clocks. For example when sending a JMS message, then you can set a time to live value on the message. Then the receiver can inspect this value, and determine if the message is already expired, and thus drop the message instead of consume and process it. However this requires that both sender and receiver have synchronized clocks. If you are using ActiveMQ then you can use the timestamp plugin to synchronize clocks. 57.16. About time to live Read first above about synchronized clocks. When you do request/reply (InOut) over JMS with Camel then Camel uses a timeout on the sender side, which is default 20 seconds from the requestTimeout option. You can control this by setting a higher/lower value. However the time to live value is still set on the message being send. So that requires the clocks to be synchronized between the systems. If they are not, then you may want to disable the time to live value being set. This is now possible using the disableTimeToLive option from Camel 2.8 onwards. So if you set this option to disableTimeToLive=true , then Camel does not set any time to live value when sending JMS messages. But the request timeout is still active. So for example if you do request/reply over JMS and have disabled time to live, then Camel will still use a timeout by 20 seconds (the requestTimeout option). That option can of course also be configured. So the two options requestTimeout and disableTimeToLive gives you fine grained control when doing request/reply. You can provide a header in the message to override and use as the request timeout value instead of the endpoint configured value. 
For example: from("direct:someWhere") .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply"); In the route above we have an endpoint-configured requestTimeout of 30 seconds. So Camel will wait up to 30 seconds for that reply message to come back on the bar queue. If no reply message is received then an org.apache.camel.ExchangeTimedOutException is set on the Exchange and Camel continues routing the message, which would then fail due to the exception, and Camel's error handler reacts. If you want to use a per-message timeout value, you can set the header with key org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT which has the constant value "CamelJmsRequestTimeout" with a timeout value of long type. For example, we can use a bean to compute the timeout value per individual message, such as calling the "whatIsTheTimeout" method on the service bean as shown below: from("direct:someWhere") .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout")) .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply"); When you do fire and forget (InOnly) over JMS with Camel then Camel by default does not set any time to live value on the message. You can configure a value by using the timeToLive option. For example, to indicate 5 seconds, you set timeToLive=5000 . The option disableTimeToLive can be used to force disabling the time to live, also for InOnly messaging. The requestTimeout option is not used for InOnly messaging. 57.17. Enabling Transacted Consumption A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint: transacted = true transactionManager = a Transaction Manager - typically the JmsTransactionManager See the Transactional Client EIP pattern for further details. Transactions and [Request Reply] over JMS When using Request Reply over JMS you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won't receive anything at all until the transaction commits. Therefore to use Request Reply you must commit a transaction after sending the request and then use a separate transaction for receiving the response. To address this issue the JMS component uses different properties to specify transaction use for one-way messaging and request/reply messaging: The transacted property applies only to the InOnly message Exchange Pattern (MEP). You can leverage the DMLC transacted session API using the following properties on component/endpoint: transacted = true lazyCreateTransactionManager = false The benefit of doing so is that the cacheLevel setting will be honored when using local transactions without a configured TransactionManager. When a TransactionManager is configured, no caching happens at DMLC level and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see the Spring JMS documentation.
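As a minimal sketch of the transacted setup described above (the connection factory variable, queue name, and bean name are assumptions), the component can be created pre-configured for transacted consumption, relying on Camel lazily creating a JmsTransactionManager since lazyCreateTransactionManager defaults to true:
// "connectionFactory" is an assumed javax.jms.ConnectionFactory for your broker
JmsComponent jms = JmsComponent.jmsComponentTransacted(connectionFactory);
camelContext.addComponent("jms", jms);

// each message is consumed in a local JMS transaction and is redelivered by the broker on rollback
from("jms:queue:orders?transacted=true")
    .to("bean:orderService");
If you instead configure an external PlatformTransactionManager on the component, keep in mind the note above about cache levels and pooled connection factories.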
57.18. Using JMSReplyTo for late replies When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo javax.jms.Destination object, having the key ReplyTo . You can obtain this Destination as follows: Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class); And then later use it to send a reply using regular JMS or Camel. // we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, "Here is the late reply."); A different solution to sending a reply is to provide the replyDestination object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however. For example: // we pretend to send it to some non existing dummy queue template.send("activemq:queue:dummy", new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent there instead of to the dummy queue exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody("Here is the late reply."); } }); 57.19. Using a request timeout In the sample below we send a Request Reply style message Exchange (we use the requestBody method = InOut ) to the slow queue for further processing in Camel and we wait for a return reply. 57.20. Sending an InOnly message and keeping the JMSReplyTo header When sending to a JMS destination using camel-jms, the producer will use the MEP to detect if it is InOnly or InOut messaging. However, there can be times when you want to send an InOnly message but keep the JMSReplyTo header. To do so, you have to instruct Camel to keep it; otherwise, the JMSReplyTo header will be dropped. For example, to send an InOnly message to the foo queue, but with a JMSReplyTo with bar queue, you can do as follows: template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody("World"); exchange.getIn().setHeader("JMSReplyTo", "bar"); } }); Notice we use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo header. 57.21. Setting JMS provider options on the destination Some JMS providers, like IBM's WebSphere MQ, need options to be set on the JMS destination. For example, you may need to specify the targetClient option. Since targetClient is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so: // ... .setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1")) .to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true"); Some versions of WMQ won't accept this option on the destination name and will throw an exception. A workaround is to use a custom DestinationResolver: JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1"); } }); 57.22. Spring Boot Auto-Configuration The component supports 99 options, which are listed below. Name Description Default Type camel.component.jms.accept-messages-while-stopping Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue.
If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. false Boolean camel.component.jms.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. AUTO_ACKNOWLEDGE String camel.component.jms.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to the JMS specification. For example, some message systems such as WMQ do this with header names using the prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use * as suffix for wildcard matching. String camel.component.jms.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. true Boolean camel.component.jms.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request/reply messaging allows the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers, but to enable it for reply managers you must enable this flag. false Boolean camel.component.jms.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.jms.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.jms.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority.
Messages will only go to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.jms.artemis-streaming-enabled Whether to optimize for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false Boolean camel.component.jms.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. If enabled, then the JmsConsumer may pick up the message from the JMS queue while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as the transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.jms.async-start-listener Whether to start up the JmsConsumer message listener asynchronously, when starting a route. For example, if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failing over. This will cause Camel to block while starting routes. By setting this option to true, you will let routes start up, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; you can then restart the route to retry. false Boolean camel.component.jms.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.jms.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.jms.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jms.cache-level Sets the cache level by ID for the underlying JMS resources. See the cacheLevelName option for more details. Integer camel.component.jms.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.jms.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1. String camel.component.jms.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS).
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS, the option replyToConcurrentConsumers is used to control the number of concurrent consumers on the reply message listener. 1 Integer camel.component.jms.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.jms.connection-factory The connection factory to be used. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. ConnectionFactory camel.component.jms.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.jms.correlation-property When using the InOut exchange pattern, use this JMS property instead of the JMSCorrelationID JMS property to correlate messages. If set, messages will be correlated solely on the value of this property; the JMSCorrelationID property will be ignored and not set by Camel. String camel.component.jms.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.jms.delivery-delay Sets the delivery delay to use for send calls for JMS. This option requires a JMS 2.0 compliant broker. -1 Long camel.component.jms.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.jms.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.jms.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to look up the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.jms.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another.
false Boolean camel.component.jms.disable-time-to-live Use this option to force disabling time to live. For example, when you do request/reply over JMS, Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See the section About time to live for more details. false Boolean camel.component.jms.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.jms.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded, which generally is inefficient, as the JMS properties may not be required, but sometimes this can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.jms.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to ${exception.message} String camel.component.jms.enabled Whether to enable auto configuration of the jms component. This is enabled by default. Boolean camel.component.jms.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure the logging level and whether stack traces should be logged using the errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.jms.error-handler-log-stack-trace Allows you to control whether stack traces should be logged or not by the default errorHandler. true Boolean camel.component.jms.error-handler-logging-level Allows you to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.jms.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.jms.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.jms.expose-listener-session Specifies whether the listener session should be exposed when consuming messages.
false Boolean camel.component.jms.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.jms.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.jms.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.jms.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.jms.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.jms.include-all-j-m-s-x-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.jms.include-sent-j-m-s-message-i-d Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.jms.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.jms.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.jms.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.jms.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jms.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.jms.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. Integer camel.component.jms.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.jms.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.jms.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.jms.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.jms.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. MessageListenerContainerFactory camel.component.jms.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true Boolean camel.component.jms.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. 
false Boolean camel.component.jms.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.jms.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.jms.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.jms.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.jms.recovery-interval Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. 5000 Long camel.component.jms.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.jms.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.jms.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.jms.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.jms.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.jms.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.jms.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.jms.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.jms.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.jms.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. 
By default Camel will use temporary queues. However, if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See the Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues have lower performance than the alternatives, Temporary and Exclusive. ReplyToType camel.component.jms.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per-message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. 20000 Long camel.component.jms.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.jms.selector Sets the JMS selector to use. String camel.component.jms.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc. will either be sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used, which forces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks, and each chunk is then written to the StreamMessage until no more data remains. false Boolean camel.component.jms.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.jms.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.jms.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker.
false Boolean camel.component.jms.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.jms.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.jms.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts, all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers are tested as well. false Boolean camel.component.jms.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). -1 Long camel.component.jms.transacted Specifies whether to use transacted mode. false Boolean camel.component.jms.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to the createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.jms.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jms.transaction-name The name of the transaction to use. String camel.component.jms.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.jms.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at the Class level, which forces a strong coupling between the producer and consumer. false Boolean camel.component.jms.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers.
The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at the Class level, which forces a strong coupling between the producers and consumers, which must use compatible Camel versions. false Boolean camel.component.jms.use-message-i-d-as-correlation-i-d Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false Boolean camel.component.jms.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for the provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.jms.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for the provisional correlation id to be updated. The option is a long type. 100 Long | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>",
"jms:[queue:|topic:]destinationName[?options]",
"jms:FOO.BAR",
"jms:queue:FOO.BAR",
"jms:topic:Stocks.Prices",
"jms:destinationType:destinationName",
"from(\"jms:queue:foo\"). to(\"bean:myBusinessLogic\");",
"from(\"jms:topic:OrdersTopic\"). filter().method(\"myBean\", \"isGoldCustomer\"). to(\"jms:queue:BigSpendersQueue\");",
"from(\"file://orders\"). convertBodyTo(String.class). to(\"jms:topic:OrdersTopic\");",
"<route> <from uri=\"jms:topic:OrdersTopic\"/> <filter> <method ref=\"myBean\" method=\"isGoldCustomer\"/> <to uri=\"jms:queue:BigSpendersQueue\"/> </filter> </route>",
"// setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel(\"jms:queue:dead?transferExchange=true\"));",
"from(\"jms:queue:dead\").to(\"bean:myErrorAnalyzer\"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage();",
"// we sent it to a seda dead queue first errorHandler(deadLetterChannel(\"seda:dead\")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from(\"seda:dead\").transform(exceptionMessage()).to(\"jms:queue:dead\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?messageConverter=#myMessageConverter\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?jmsMessageType=Text\");",
"from(\"file://inbox/order\").setHeader(\"CamelJmsMessageType\", JmsMessageType.Text).to(\"jms:queue:order\");",
"2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}",
"from(\"activemq:queue:in\") .to(\"bean:validateOrder\") .to(ExchangePattern.InOnly, \"activemq:topic:order\") .to(\"bean:handleOrder\");",
"from(\"file://inbox\") .to(\"bean:computeDestination\") .to(\"activemq:queue:dummy\");",
"public void setJmsHeader(Exchange exchange) { String id = . exchange.getIn().setHeader(\"CamelJmsDestinationName\", \"order:\" + id\"); }",
"<bean id=\"weblogic\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\" ref=\"myConnectionFactory\"/> </bean> <jee:jndi-lookup id=\"myConnectionFactory\" jndi-name=\"jms/connectionFactory\"/>",
"from(\"jms:SomeQueue?concurrentConsumers=20\"). bean(MyClass.class);",
"from(\"jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true\"). bean(MyClass.class);",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyToConcurrentConsumers=5\") .to(yyy) .to(zzz);",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&receiveTimeout=250\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy) from(aaa) .inOut().to(\"activemq:queue:order?replyTo=order.reply&replyToType=Exclusive\") .to(bbb)",
"from(\"direct:someWhere\") .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"from(\"direct:someWhere\") .setHeader(\"CamelJmsRequestTimeout\", method(ServiceBean.class, \"whatIsTheTimeout\")) .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);",
"// we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, \"Here is the late reply.\");",
"// we pretend to send it to some non existing dummy queue template.send(\"activemq:queue:dummy, new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent to there instead of dummy exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody(\"Here is the late reply.\"); } }",
"template.send(\"activemq:queue:foo?preserveMessageQos=true\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody(\"World\"); exchange.getIn().setHeader(\"JMSReplyTo\", \"bar\"); } });",
"// .setHeader(\"CamelJmsDestinationName\", constant(\"queue:///MY_QUEUE?targetClient=1\")) .to(\"wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true\");",
"com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified value 'MY_QUEUE?targetClient=1' is not allowed for 'XMSC_DESTINATION_NAME'",
"JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue(\"queue:///\" + destinationName + \"?targetClient=1\"); } });"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jms-component-starter |
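Section 57.17 above lists the two properties needed for transacted consumption but does not show them in a route. The following Java DSL sketch illustrates one common way to wire them up; the queue name, the orderService bean, and the jmsTransactionManager registry name are illustrative assumptions, not values taken from this guide.

// a minimal sketch of transacted consumption, assuming a JmsTransactionManager
// has been registered in the Camel registry under the name "jmsTransactionManager"
import org.apache.camel.builder.RouteBuilder;

public class TransactedConsumeRoute extends RouteBuilder {
    @Override
    public void configure() {
        // transacted=true runs the consumer inside a JMS transaction;
        // the #jmsTransactionManager syntax references a bean in the registry
        from("jms:queue:orders?transacted=true&transactionManager=#jmsTransactionManager")
            .to("bean:orderService");
    }
}

If the route fails, the JMS transaction is rolled back and the broker redelivers the message; see the Transactional Client EIP pattern referenced above for the full semantics.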
Deploying OpenShift Data Foundation using IBM Cloud | Deploying OpenShift Data Foundation using IBM Cloud Red Hat OpenShift Data Foundation 4.16 Instructions on deploying Red Hat OpenShift Data Foundation using IBM Cloud Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on IBM cloud clusters. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_cloud/index |
GitOps | GitOps OpenShift Container Platform 4.13 A declarative way to implement continuous deployment for cloud native applications. Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/gitops/index |
3.2. CPU Performance Options | 3.2. CPU Performance Options Several CPU related options are available to your guest virtual machines. Configured correctly, these options can have a large impact on performance. The following image shows the CPU options available to your guests. The remainder of this section shows and explains the impact of these options. Figure 3.3. CPU Performance Options 3.2.1. Option: Available CPUs Use this option to adjust the amount of virtual CPUs (vCPUs) available to the guest. If you allocate more than is available on the host (known as overcommitting ), a warning is displayed, as shown in the following image: Figure 3.4. CPU overcommit CPUs are overcommitted when the sum of vCPUs for all guests on the system is greater than the number of host CPUs on the system. You can overcommit CPUs with one or multiple guests if the total number of vCPUs is greater than the number of host CPUs. Important As with memory overcommitting, CPU overcommitting can have a negative impact on performance, for example in situations with a heavy or unpredictable guest workload. See the Virtualization Deployment and Administration Guide for more details on overcommitting. 3.2.2. Option: CPU Configuration Use this option to select the CPU configuration type, based on the intended CPU model. Click the Copy host CPU configuration check box to detect and apply the physical host's CPU model and configuration, or expand the list to see available options. Once you select a CPU configuration, its available CPU features/instructions are displayed and can be individually enabled/disabled in the CPU Features list. Figure 3.5. CPU Configuration Options Note Copying the host CPU configuration is recommended over manual configuration. Note Alternately, run the virsh capabilities command on your host machine to view the virtualization capabilities of your system, including CPU types and NUMA capabilities. 3.2.3. Option: CPU Topology Use this option to apply a particular CPU topology (Sockets, Cores, Threads) to the virtual CPUs for your guest virtual machine. Figure 3.6. CPU Topology Options Note Although your environment may dictate other requirements, selecting any intended number of sockets, but with only a single core and a single thread usually gives the best performance results. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-virt_manager-cpu_options |
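The CPU options described above are stored by virt-manager in the guest's libvirt domain XML, which you can inspect with virsh dumpxml <guest-name>. The fragment below is a minimal illustrative sketch of how the vCPU count and topology settings can appear in that XML; the values (four vCPUs in one socket with four cores and one thread each) are assumptions chosen to match the single-socket guidance in the text, not output copied from a real guest.

<vcpu placement='static'>4</vcpu>
<cpu mode='host-model'>
  <!-- one socket, four cores, one thread per core -->
  <topology sockets='1' cores='4' threads='1'/>
</cpu>

Selecting Copy host CPU configuration in virt-manager typically corresponds to a host-model CPU mode in this XML, which is why it tracks the host CPU more closely than picking a named model.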
Chapter 1. Red Hat build of OpenJDK overview | Chapter 1. Red Hat build of OpenJDK overview The Red Hat build of OpenJDK is a free and open source implementation of the Java Platform, Standard Edition (Java SE). It is based on the upstream OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u projects and includes the Shenandoah Garbage Collector in all versions. Multi-platform - The Red Hat build of OpenJDK is now supported on Windows and RHEL. This helps you standardize on a single Java platform across desktop, datacenter, and hybrid cloud. Frequent releases - Red Hat delivers quarterly updates of JRE and JDK for the Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, and Red Hat build of OpenJDK 21 distributions. These are available as rpm , portables, msi , zip files and containers. Long-term support - Red Hat supports the recently released Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, and Red Hat build of OpenJDK 21 distributions. For more information about the support lifecycle, see OpenJDK Life Cycle and Support Policy . Java Web Start - Red Hat build of OpenJDK supports Java Web Start for RHEL. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/getting_started_with_red_hat_build_of_openjdk_21/openjdk-overview |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ Core Protocol JMS in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with AMQ Core Protocol JMS, you must install Apache Maven . To use AMQ Core Protocol JMS, you must install Java. 2.2. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-jms-client</artifactId> <version>2.16.0.redhat-00005</version> </dependency> The client is now available in your Maven project. 2.3. Installing a local Maven repository As an alternative to the online repository, AMQ Core Protocol JMS can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the AMQ Broker 7.9.0 Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-broker-7.9.0-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.4. Installing the examples Procedure Use your subscription to download the AMQ Broker 7.9.0 .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-broker-7.9.0.zip On Windows, right-click the .zip file and select Extract All . When you extract the contents of the .zip file, a directory named amq-broker-7.9.0 is created. This is the top-level directory of the installation and is referred to as <install-dir> throughout this document. | [
"<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>",
"<dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-jms-client</artifactId> <version>2.16.0.redhat-00005</version> </dependency>",
"unzip amq-broker-7.9.0-maven-repository.zip",
"unzip amq-broker-7.9.0.zip"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/installation |
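Once the artemis-jms-client dependency described above is on the classpath, a minimal client can be written against the standard JMS API. The sketch below is illustrative only and is not taken from the AMQ examples: the broker URL tcp://localhost:61616 and the queue name exampleQueue are assumptions, and the examples shipped under <install-dir> remain the authoritative reference.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class HelloCoreProtocolJms {
    public static void main(String[] args) throws Exception {
        // assumes a broker is listening on the default Artemis acceptor port
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("Hello from AMQ Core Protocol JMS");
            producer.send(message);
        } finally {
            factory.close();
        }
    }
}

Build it with the same Maven setup described above, for example inside a project that declares the artemis-jms-client dependency from the Red Hat repository.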
Chapter 1. About the OVN-Kubernetes network plugin | Chapter 1. About the OVN-Kubernetes network plugin The OVN-Kubernetes Container Network Interface (CNI) plugin is the default networking solution for MicroShift clusters. OVN-Kubernetes is a virtualized network for pods and services that is based on Open Virtual Network (OVN). Default network configuration and connections are applied automatically in MicroShift with the microshift-networking RPM during installation. A cluster that uses the OVN-Kubernetes network plugin also runs Open vSwitch (OVS) on the node. OVN-K configures OVS on the node to implement the declared network configuration. Host physical interfaces are not bound by default to the OVN-K gateway bridge, br-ex . You can use standard tools on the host for managing the default gateway, such as the Network Manager CLI ( nmcli ). Changing the CNI is not supported on MicroShift. Using configuration files or custom scripts, you can configure the following networking settings: You can use subnet CIDR ranges to allocate IP addresses to pods. You can change the maximum transmission unit (MTU) value. You can configure firewall ingress and egress. You can define network policies in the MicroShift cluster, including ingress and egress rules. You can use the MicroShift Multus plug-in to chain other CNI plugins. You can configure or remove the ingress router. 1.1. MicroShift networking configuration matrix The following table summarizes the status of networking features and capabilities that are either present as defaults, supported for configuration, or not available with the MicroShift service: Table 1.1. MicroShift networking features and capabilities overview Network capability Availability Configuration supported Advertise address Yes Yes [1] Kubernetes network policy Yes Yes Kubernetes network policy logs Not available N/A Load balancing Yes Yes Multicast DNS Yes Yes [2] Network proxies Yes [3] CRI-O Network performance Yes MTU configuration Egress IPs Not available N/A Egress firewall Not available N/A Egress router Not available N/A Firewall No [4] Yes Hardware offloading Not available N/A Hybrid networking Not available N/A IPsec encryption for intra-cluster communication Not available N/A IPv6 Supported [5] N/A Ingress router Yes Yes [6] Multiple networks plug-in Yes Yes If unset, the default value is set to the immediate subnet after the service network. For example, when the service network is 10.43.0.0/16 , the advertiseAddress is set to 10.44.0.0/32 . You can use the multicast DNS protocol (mDNS) to allow name resolution and service discovery within a Local Area Network (LAN) using multicast exposed on the 5353/UDP port. There is no built-in transparent proxying of egress traffic in MicroShift. Egress must be manually configured. Setting up the firewalld service is supported by RHEL for Edge. IPv6 is supported in both single-stack and dual-stack networks with the OVN-Kubernetes network plugin. IPv6 can also be used by connecting to other networks with the MicroShift Multus CNI plugin. Configure by using the MicroShift config.yaml file. 1.1.1. Default settings If you do not create a config.yaml file or use a configuration snippet YAML file, default values are used. The following example shows the default configuration settings. 
To see the default values, run the following command: USD microshift show-config Default values example output in YAML form apiServer: advertiseAddress: 10.44.0.0/32 1 auditLog: maxFileAge: 0 maxFileSize: 200 maxFiles: 10 profile: Default namedCertificates: - certPath: "" keyPath: "" names: - "" subjectAltNames: [] debugging: logLevel: "Normal" dns: baseDomain: microshift.example.com etcd: memoryLimitMB: 0 ingress: defaultHTTPVersion: 1 forwardedHeaderPolicy: "" httpCompression: mimeTypes: - "" httpEmptyRequestsPolicy: Respond listenAddress: - "" logEmptyRequests: Log ports: http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed status: Managed tuningOptions: clientFinTimeout: "" clientTimeout: "" headerBufferBytes: 0 headerBufferMaxRewriteBytes: 0 healthCheckInterval: "" maxConnections: 0 serverFinTimeout: "" serverTimeout: "" threadCount: 0 tlsInspectDelay: "" tunnelTimeout: "" kubelet: manifests: kustomizePaths: - /usr/lib/microshift/manifests - /usr/lib/microshift/manifests.d/* - /etc/microshift/manifests - /etc/microshift/manifests.d/* network: clusterNetwork: - 10.42.0.0/16 serviceNetwork: - 10.43.0.0/16 serviceNodePortRange: 30000-32767 node: hostnameOverride: "" nodeIP: "" 2 nodeIPv6: "" storage: driver: "" 3 optionalCsiComponents: 4 - "" 1 Calculated based on the address of the service network. 2 The IP address of the default route. 3 Default null value deploys Logical Volume Managed Storage (LVMS). 4 Default null value deploys snapshot-controller . 1.2. Network features Networking features available with MicroShift 4.18 include: Kubernetes network policy Dynamic node IP Custom gateway interface Second gateway interface Cluster network on specified host interface Blocking external access to NodePort service on specific host interfaces Networking features not available with MicroShift 4.18: Egress IP/firewall/QoS: disabled Hybrid networking: not supported IPsec: not supported Hardware offload: not supported 1.3. IP forward The host network sysctl net.ipv4.ip_forward kernel parameter is automatically enabled by the ovnkube-master container when started. This is required to forward incoming traffic to the CNI. For example, accessing the NodePort service from outside of a cluster fails if ip_forward is disabled. 1.4. Network performance optimizations By default, three performance optimizations are applied to OVS services to minimize resource consumption: CPU affinity to ovs-vswitchd.service and ovsdb-server.service no-mlockall to openvswitch.service Limit handler and revalidator threads to ovs-vswitchd.service 1.5. MicroShift networking components and services This brief overview describes networking components and their operation in MicroShift. The microshift-networking RPM is a package that automatically pulls in any networking-related dependencies and systemd services to initialize networking, for example, the microshift-ovs-init systemd service. NetworkManager NetworkManager is required to set up the initial gateway bridge on the MicroShift node. The NetworkManager and NetworkManager-ovs RPM packages are installed as dependencies to the microshift-networking RPM package, which contains the necessary configuration files. NetworkManager in MicroShift uses the keyfile plugin and is restarted after installation of the microshift-networking RPM package. microshift-ovs-init The microshift-ovs-init.service is installed by the microshift-networking RPM package as a dependent systemd service to microshift.service . 
It is responsible for setting up the OVS gateway bridge. OVN containers Two OVN-Kubernetes daemon sets are rendered and applied by MicroShift. ovnkube-master Includes the northd , nbdb , sbdb and ovnkube-master containers. ovnkube-node The ovnkube-node includes the OVN-Controller container. After MicroShift starts, the OVN-Kubernetes daemon sets are deployed in the openshift-ovn-kubernetes namespace. Packaging OVN-Kubernetes manifests and startup logic are built into MicroShift. The systemd services and configurations included in the microshift-networking RPM are: /etc/NetworkManager/conf.d/microshift-nm.conf for NetworkManager.service /etc/systemd/system/ovs-vswitchd.service.d/microshift-cpuaffinity.conf for ovs-vswitchd.service /etc/systemd/system/ovsdb-server.service.d/microshift-cpuaffinity.conf for ovs-server.service /usr/bin/configure-ovs-microshift.sh for microshift-ovs-init.service /usr/bin/configure-ovs.sh for microshift-ovs-init.service /etc/crio/crio.conf.d/microshift-ovn.conf for the CRI-O service 1.6. Bridge mappings Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the provider network and arrives at the br-int bridge. A patch port between br-int and br-ex then allows the traffic to traverse to and from the provider network and the edge network. Kubernetes pods are connected to the br-int bridge through virtual ethernet pair: one end of the virtual ethernet pair is attached to the pod namespace, and the other end is attached to the br-int bridge. 1.7. Network topology OVN-Kubernetes provides an overlay-based networking implementation. This overlay includes an OVS-based implementation of Service and NetworkPolicy. The overlay network uses the Geneve (Generic Network Virtualization Encapsulation) tunnel protocol. The pod maximum transmission unit (MTU) for the Geneve tunnel is set to the default route MTU if it is not configured. To configure the MTU, you must set an equal-to or less-than value than the MTU of the physical interface on the host. A less-than value for the MTU makes room for the required information that is added to the tunnel header before it is transmitted. OVS runs as a systemd service on the MicroShift node. The OVS RPM package is installed as a dependency to the microshift-networking RPM package. OVS is started immediately when the microshift-networking RPM is installed. Red Hat build of MicroShift network topology 1.7.1. Description of the OVN logical components of the virtualized network OVN node switch A virtual switch named <node-name> . The OVN node switch is named according to the hostname of the node. In this example, the node-name is microshift-dev . OVN cluster router A virtual router named ovn_cluster_router , also known as the distributed router. In this example, the cluster network is 10.42.0.0/16 . OVN join switch A virtual switch named join . OVN gateway router A virtual router named GR_<node-name> , also known as the external gateway router. OVN external switch A virtual switch named ext_<node-name>. 1.7.2. Description of the connections in the network topology figure The north-south traffic between the network service and the OVN external switch ext_microshift-dev is provided through the host kernel by the gateway bridge br-ex . The OVN gateway router GR_microshift-dev is connected to the external network switch ext_microshift-dev through the logical router port 4. Port 4 is attached with the node IP address 192.168.122.14. 
The join switch join connects the OVN gateway router GR_microshift-dev to the OVN cluster router ovn_cluster_router . The IP address range is 100.62.0.0/16. The OVN gateway router GR_microshift-dev connects to the OVN join switch join through the logical router port 3. Port 3 attaches with the internal IP address 100.64.0.2. The OVN cluster router ovn_cluster_router connects to the join switch join through the logical router port 2. Port 2 attaches with the internal IP address 100.64.0.1. The OVN cluster router ovn_cluster_router connects to the node switch microshift-dev through the logical router port 1. Port 1 is attached with the OVN cluster network IP address 10.42.0.1. The east-west traffic between the pods and the network service is provided by the OVN cluster router ovn_cluster_router and the node switch microshift-dev . The IP address range is 10.42.0.0/24. The east-west traffic between pods is provided by the node switch microshift-dev without network address translation (NAT). The north-south traffic between the pods and the external network is provided by the OVN cluster router ovn_cluster_router and the host network. This router is connected through the ovn-kubernetes management port ovn-k8s-mp0 , with the IP address 10.42.0.2. All the pods are connected to the OVN node switch through their interfaces. In this example, Pod 1 and Pod 2 are connected to the node switch through Interface 1 and Interface 2 . 1.8. Additional resources Using a YAML configuration file Understanding networking settings About using multiple networks About network policies | [
"microshift show-config",
"apiServer: advertiseAddress: 10.44.0.0/32 1 auditLog: maxFileAge: 0 maxFileSize: 200 maxFiles: 10 profile: Default namedCertificates: - certPath: \"\" keyPath: \"\" names: - \"\" subjectAltNames: [] debugging: logLevel: \"Normal\" dns: baseDomain: microshift.example.com etcd: memoryLimitMB: 0 ingress: defaultHTTPVersion: 1 forwardedHeaderPolicy: \"\" httpCompression: mimeTypes: - \"\" httpEmptyRequestsPolicy: Respond listenAddress: - \"\" logEmptyRequests: Log ports: http: 80 https: 443 routeAdmissionPolicy: namespaceOwnership: InterNamespaceAllowed status: Managed tuningOptions: clientFinTimeout: \"\" clientTimeout: \"\" headerBufferBytes: 0 headerBufferMaxRewriteBytes: 0 healthCheckInterval: \"\" maxConnections: 0 serverFinTimeout: \"\" serverTimeout: \"\" threadCount: 0 tlsInspectDelay: \"\" tunnelTimeout: \"\" kubelet: manifests: kustomizePaths: - /usr/lib/microshift/manifests - /usr/lib/microshift/manifests.d/* - /etc/microshift/manifests - /etc/microshift/manifests.d/* network: clusterNetwork: - 10.42.0.0/16 serviceNetwork: - 10.43.0.0/16 serviceNodePortRange: 30000-32767 node: hostnameOverride: \"\" nodeIP: \"\" 2 nodeIPv6: \"\" storage: driver: \"\" 3 optionalCsiComponents: 4 - \"\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/networking/microshift-cni |
Chapter 12. Monitoring | Chapter 12. Monitoring 12.1. Monitoring overview You can monitor the health of your cluster and virtual machines (VMs) with the following tools: Monitoring OpenShift Virtualization VMs health status View the overall health of your OpenShift Virtualization environment in the web console by navigating to the Home Overview page in the OpenShift Container Platform web console. The Status card displays the overall health of OpenShift Virtualization based on the alerts and conditions. OpenShift Container Platform cluster checkup framework Run automated tests on your cluster with the OpenShift Container Platform cluster checkup framework to check the following conditions: Network connectivity and latency between two VMs attached to a secondary network interface VM running a Data Plane Development Kit (DPDK) workload with zero packet loss Prometheus queries for virtual resources Query vCPU, network, storage, and guest memory swapping usage and live migration progress. VM custom metrics Configure the node-exporter service to expose internal VM metrics and processes. VM health checks Configure readiness, liveness, and guest agent ping probes and a watchdog for VMs. Runbooks Diagnose and resolve issues that trigger OpenShift Virtualization alerts in the OpenShift Container Platform web console. 12.2. OpenShift Virtualization cluster checkup framework OpenShift Virtualization includes the following predefined checkups that can be used for cluster maintenance and troubleshooting: Latency checkup Verifies network connectivity and measures latency between two virtual machines (VMs) that are attached to a secondary network interface. DPDK checkup Verifies that a node can run a VM with a Data Plane Development Kit (DPDK) workload with zero packet loss. Important The OpenShift Virtualization cluster checkup framework is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 12.2.1. About the OpenShift Virtualization cluster checkup framework A checkup is an automated test workload that allows you to verify if a specific cluster functionality works as expected. The cluster checkup framework uses native Kubernetes resources to configure and execute the checkup. By using predefined checkups, cluster administrators and developers can improve cluster maintainability, troubleshoot unexpected behavior, minimize errors, and save time. They can also review the results of the checkup and share them with experts for further analysis. Vendors can write and publish checkups for features or services that they provide and verify that their customer environments are configured correctly. Running a predefined checkup in an existing namespace involves setting up a service account for the checkup, creating the Role and RoleBinding objects for the service account, enabling permissions for the checkup, and creating the input config map and the checkup job. You can run a checkup multiple times. Important You must always: Verify that the checkup image is from a trustworthy source before applying it. 
Review the checkup permissions before creating the Role and RoleBinding objects. 12.2.1.1. Running a latency checkup You use a predefined checkup to verify network connectivity and measure latency between two virtual machines (VMs) that are attached to a secondary network interface. The latency checkup uses the ping utility. You run a latency checkup by performing the following steps: Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map. Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the latency checkup resources. Prerequisites You installed the OpenShift CLI ( oc ). The cluster has at least two worker nodes. You configured a network attachment definition for a namespace. Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the latency checkup: Example 12.1. Example role manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: ["kubevirt.io"] resources: ["virtualmachineinstances"] verbs: ["get", "create", "delete"] - apiGroups: ["subresources.kubevirt.io"] resources: ["virtualmachineinstances/console"] verbs: ["get"] - apiGroups: ["k8s.cni.cncf.io"] resources: ["network-attachment-definitions"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: ["get", "update"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1 1 <target_namespace> is the namespace where the checkup is to be run. This must be an existing namespace where the NetworkAttachmentDefinition object resides. Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" 1 spec.param.maxDesiredLatencyMilliseconds: "10" 2 spec.param.sampleDurationSeconds: "5" 3 spec.param.sourceNode: "worker1" 4 spec.param.targetNode: "worker2" 5 1 The name of the NetworkAttachmentDefinition object. 2 Optional: The maximum desired latency, in milliseconds, between the virtual machines. If the measured latency exceeds this value, the checkup fails. 3 Optional: The duration of the latency check, in seconds. 4 Optional: When specified, latency is measured from this node to the target node. If the source node is specified, the spec.param.targetNode field cannot be empty. 
5 Optional: When specified, latency is measured from the source node to this node. Apply the config map manifest in the target namespace: USD oc apply -n <target_namespace> -f <latency_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.14.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <latency_job>.yaml Wait for the job to complete: USD oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m Review the results of the latency checkup by running the following command. If the maximum measured latency is greater than the value of the spec.param.maxDesiredLatencyMilliseconds attribute, the checkup fails and returns an error. USD oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: "blue-network" spec.param.maxDesiredLatencyMilliseconds: "10" spec.param.sampleDurationSeconds: "5" spec.param.sourceNode: "worker1" spec.param.targetNode: "worker2" status.succeeded: "true" status.failureReason: "" status.completionTimestamp: "2022-01-01T09:00:07Z" status.startTimestamp: "2022-01-01T09:00:00Z" status.result.avgLatencyNanoSec: "177000" status.result.maxLatencyNanoSec: "244000" 1 status.result.measurementDurationSec: "5" status.result.minLatencyNanoSec: "135000" status.result.sourceNode: "worker1" status.result.targetNode: "worker2" 1 The maximum measured latency in nanoseconds. Optional: To view the detailed job log in case of checkup failure, use the following command: USD oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace> Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> kubevirt-vm-latency-checkup USD oc delete configmap -n <target_namespace> kubevirt-vm-latency-checkup-config Optional: If you do not plan to run another checkup, delete the roles manifest: USD oc delete -f <latency_sa_roles_rolebinding>.yaml 12.2.1.2. DPDK checkup Use a predefined checkup to verify that your OpenShift Container Platform cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application. You run a DPDK checkup by performing the following steps: Create a service account, role, and role bindings for the DPDK checkup. Create a config map to provide the input to run the checkup and to store the results. Create a job to run the checkup. Review the results in the config map.
Optional: To rerun the checkup, delete the existing config map and job and then create a new config map and job. When you are finished, delete the DPDK checkup resources. Prerequisites You have installed the OpenShift CLI ( oc ). The cluster is configured to run DPDK applications. The project is configured to run DPDK applications. Procedure Create a ServiceAccount , Role , and RoleBinding manifest for the DPDK checkup: Example 12.2. Example service account, role, and rolebinding manifest file --- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "get", "update" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ "kubevirt.io" ] resources: [ "virtualmachineinstances" ] verbs: [ "create", "get", "delete" ] - apiGroups: [ "subresources.kubevirt.io" ] resources: [ "virtualmachineinstances/console" ] verbs: [ "get" ] - apiGroups: [ "" ] resources: [ "configmaps" ] verbs: [ "create", "delete" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker Apply the ServiceAccount , Role , and RoleBinding manifest: USD oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml Create a ConfigMap manifest that contains the input parameters for the checkup: Example input config map apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0" 2 spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0" 3 1 The name of the NetworkAttachmentDefinition object. 2 The container disk image for the traffic generator. In this example, the image is pulled from the upstream Project Quay Container Registry. 3 The container disk image for the VM under test. In this example, the image is pulled from the upstream Project Quay Container Registry.
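Note Optionally, you can pin the traffic generator and the VM under test to specific DPDK-capable nodes by adding the spec.param.trafficGenTargetNodeName and spec.param.vmUnderTestTargetNodeName parameters, which are described in the config map parameters table later in this section. The following minimal sketch assumes two example worker nodes named worker-dpdk1 and worker-dpdk2 : apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> spec.param.trafficGenTargetNodeName: worker-dpdk1 spec.param.vmUnderTestTargetNodeName: worker-dpdk2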
Apply the ConfigMap manifest in the target namespace: USD oc apply -n <target_namespace> -f <dpdk_config_map>.yaml Create a Job manifest to run the checkup: Example job manifest apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.14.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] runAsNonRoot: true seccompProfile: type: "RuntimeDefault" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid Apply the Job manifest: USD oc apply -n <target_namespace> -f <dpdk_job>.yaml Wait for the job to complete: USD oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m Review the results of the checkup by running the following command: USD oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml Example output config map (success) apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: "dpdk-network-1" spec.param.trafficGenContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0" spec.param.vmUnderTestContainerDiskImage: "quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0" status.succeeded: "true" 1 status.failureReason: "" 2 status.startTimestamp: "2023-07-31T13:14:38Z" 3 status.completionTimestamp: "2023-07-31T13:19:41Z" 4 status.result.trafficGenSentPackets: "480000000" 5 status.result.trafficGenOutputErrorPackets: "0" 6 status.result.trafficGenInputErrorPackets: "0" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: "480000000" 10 status.result.vmUnderTestRxDroppedPackets: "0" 11 status.result.vmUnderTestTxDroppedPackets: "0" 12 1 Specifies if the checkup is successful ( true ) or not ( false ). 2 The reason for failure if the checkup fails. 3 The time when the checkup started, in RFC 3339 time format. 4 The time when the checkup has completed, in RFC 3339 time format. 5 The number of packets sent from the traffic generator. 6 The number of error packets sent from the traffic generator. 7 The number of error packets received by the traffic generator. 8 The node on which the traffic generator VM was scheduled. 9 The node on which the VM under test was scheduled. 10 The number of packets received on the VM under test. 11 The ingress traffic packets that were dropped by the DPDK application. 12 The egress traffic packets that were dropped from the DPDK application. Delete the job and config map that you previously created by running the following commands: USD oc delete job -n <target_namespace> dpdk-checkup USD oc delete configmap -n <target_namespace> dpdk-checkup-config Optional: If you do not plan to run another checkup, delete the ServiceAccount , Role , and RoleBinding manifest: USD oc delete -f <dpdk_sa_roles_rolebinding>.yaml 12.2.1.2.1. DPDK checkup config map parameters The following table shows the mandatory and optional parameters that you can set in the data stanza of the input ConfigMap manifest when you run a cluster DPDK readiness checkup: Table 12.1.
DPDK checkup config map input parameters Parameter Description Is Mandatory spec.timeout The time, in minutes, before the checkup fails. True spec.param.networkAttachmentDefinitionName The name of the NetworkAttachmentDefinition object of the SR-IOV NICs connected. True spec.param.trafficGenContainerDiskImage The container disk image for the traffic generator. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:main . False spec.param.trafficGenTargetNodeName The node on which the traffic generator VM is to be scheduled. The node should be configured to allow DPDK traffic. False spec.param.trafficGenPacketsPerSecond The number of packets per second, in kilo (k) or million(m). The default value is 8m. False spec.param.vmUnderTestContainerDiskImage The container disk image for the VM under test. The default value is quay.io/kiagnose/kubevirt-dpdk-checkup-vm:main . False spec.param.vmUnderTestTargetNodeName The node on which the VM under test is to be scheduled. The node should be configured to allow DPDK traffic. False spec.param.testDuration The duration, in minutes, for which the traffic generator runs. The default value is 5 minutes. False spec.param.portBandwidthGbps The maximum bandwidth of the SR-IOV NIC. The default value is 10Gbps. False spec.param.verbose When set to true , it increases the verbosity of the checkup log. The default value is false . False 12.2.1.2.2. Building a container disk image for RHEL virtual machines You can build a custom Red Hat Enterprise Linux (RHEL) 8 OS image in qcow2 format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the spec.param.vmContainerDiskImage attribute of the DPDK checkup config map. To build a container disk image, you must create an image builder virtual machine (VM). The image builder VM is a RHEL 8 VM that can be used to build custom RHEL images. Prerequisites The image builder VM must run RHEL 8.7 and must have a minimum of 2 CPU cores, 4 GiB RAM, and 20 GB of free space in the /var directory. You have installed the image builder tool and its CLI ( composer-cli ) on the VM. You have installed the virt-customize tool: # dnf install libguestfs-tools You have installed the Podman CLI tool ( podman ). Procedure Verify that you can build a RHEL 8.7 image: # composer-cli distros list Note To run the composer-cli commands as non-root, add your user to the weldr or root groups: # usermod -a -G weldr user USD newgrp weldr Enter the following command to create an image blueprint file in TOML format that contains the packages to be installed, kernel customizations, and the services to be disabled during boot time: USD cat << EOF > dpdk-vm.toml name = "dpdk_image" description = "Image to use with the DPDK checkup" version = "0.0.1" distro = "rhel-87" [[packages]] name = "dpdk" [[packages]] name = "dpdk-tools" [[packages]] name = "driverctl" [[packages]] name = "tuned-profiles-cpu-partitioning" [customizations.kernel] append = "default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7" [customizations.services] disabled = ["NetworkManager-wait-online", "sshd"] EOF Push the blueprint file to the image builder tool by running the following command: # composer-cli blueprints push dpdk-vm.toml Generate the system image by specifying the blueprint name and output file format. The Universally Unique Identifier (UUID) of the image is displayed when you start the compose process. 
# composer-cli compose start dpdk_image qcow2 Wait for the compose process to complete. The compose status must show FINISHED before you can continue to the step. # composer-cli compose status Enter the following command to download the qcow2 image file by specifying its UUID: # composer-cli compose image <UUID> Create the customization scripts by running the following commands: USD cat <<EOF >customize-vm echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf tuned-adm profile cpu-partitioning echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf EOF USD cat <<EOF >first-boot driverctl set-override 0000:06:00.0 vfio-pci driverctl set-override 0000:07:00.0 vfio-pci mkdir /mnt/huge mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB EOF Use the virt-customize tool to customize the image generated by the image builder tool: USD virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel To create a Dockerfile that contains all the commands to build the container disk image, enter the following command: USD cat << EOF > Dockerfile FROM scratch COPY <uuid>-disk.qcow2 /disk/ EOF where: <uuid>-disk.qcow2 Specifies the name of the custom image in qcow2 format. Build and tag the container by running the following command: USD podman build . -t dpdk-rhel:latest Push the container disk image to a registry that is accessible from your cluster by running the following command: USD podman push dpdk-rhel:latest Provide a link to the container disk image in the spec.param.vmContainerDiskImage attribute in the DPDK checkup config map. 12.2.2. Additional resources Attaching a virtual machine to multiple networks Using a virtual function in DPDK mode with an Intel NIC Using SR-IOV and the Node Tuning Operator to achieve a DPDK line rate Installing image builder How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription Manager 12.3. Prometheus queries for virtual resources OpenShift Virtualization provides metrics that you can use to monitor the consumption of cluster infrastructure resources, including vCPU, network, storage, and guest memory swapping. You can also use metrics to query live migration status. 12.3.1. Prerequisites To use the vCPU metric, the schedstats=enable kernel argument must be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. For more information, see Adding kernel arguments to nodes . For guest memory swapping queries to return data, memory swapping must be enabled on the virtual guests. 12.3.2. Querying metrics for all projects with the OpenShift Container Platform web console You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). 
Procedure From the Administrator perspective in the OpenShift Container Platform web console, select Observe Metrics . To add one or more queries, do any of the following: Option Description Create a custom query. Add your Prometheus Query Language (PromQL) query to the Expression field. As you type a PromQL expression, autocomplete suggestions appear in a drop-down list. These suggestions include functions, metrics, labels, and time tokens. You can use the keyboard arrows to select one of these suggested items and then press Enter to add the item to your expression. You can also move your mouse pointer over a suggested item to view a brief description of that item. Add multiple queries. Select Add query . Duplicate an existing query. Select the Options menu to the query, then choose Duplicate query . Disable a query from being run. Select the Options menu to the query and choose Disable query . To run queries that you created, select Run queries . The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Note By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query. Optional: Save the page URL to use this set of queries again in the future. Explore the visualized metrics. Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown by doing any of the following: Option Description Hide all metrics from a query. Click the Options menu for the query and click Hide all series . Hide a specific metric. Go to the query table and click the colored square near the metric name. Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Hide the plot. Select Hide graph . 12.3.3. Querying metrics for user-defined projects with the OpenShift Container Platform web console You can use the OpenShift Container Platform metrics query browser to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about any user-defined workloads that you are monitoring. As a developer, you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. 
You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for from the Project: list. Select a query from the Select query list, or create a custom PromQL query based on the selected query by selecting Show PromQL . The metrics from the queries are visualized on the plot. Note In the Developer perspective, you can only run one query at a time. Explore the visualized metrics by doing any of the following: Option Description Zoom into the plot and change the time range. Either: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. Reset the time range. Select Reset zoom . Display outputs for all queries at a specific point in time. Hold the mouse cursor on the plot at that point. The query outputs appear in a pop-up box. 12.3.4. Virtualization metrics The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see KubeVirt components metrics . Note The following examples use topk queries that specify a time period. If virtual machines are deleted during that time period, they can still appear in the query output. 12.3.4.1. vCPU metrics The following query can identify virtual machines that are waiting for Input/Output (I/O): kubevirt_vmi_vcpu_wait_seconds Returns the wait time (in seconds) for a virtual machine's vCPU. Type: Counter. A value above '0' means that the vCPU wants to run, but the host scheduler cannot run it yet. This inability to run indicates that there is an issue with I/O. Note To query the vCPU metric, the schedstats=enable kernel argument must first be applied to the MachineConfig object. This kernel argument enables scheduler statistics used for debugging and performance tuning and adds a minor additional load to the scheduler. Example vCPU wait time query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1 1 This query returns the top 3 VMs waiting for I/O at every given moment over a six-minute time period. 12.3.4.2. Network metrics The following queries can identify virtual machines that are saturating the network: kubevirt_vmi_network_receive_bytes_total Returns the total amount of traffic received (in bytes) on the virtual machine's network. Type: Counter. kubevirt_vmi_network_transmit_bytes_total Returns the total amount of traffic transmitted (in bytes) on the virtual machine's network. Type: Counter. Example network traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs transmitting the most network traffic at every given moment over a six-minute time period. 12.3.4.3. Storage metrics 12.3.4.3.1. Storage-related traffic The following queries can identify VMs that are writing large amounts of data: kubevirt_vmi_storage_read_traffic_bytes_total Returns the total amount (in bytes) of the virtual machine's storage-related traffic. Type: Counter. 
kubevirt_vmi_storage_write_traffic_bytes_total Returns the total amount of storage writes (in bytes) of the virtual machine's storage-related traffic. Type: Counter. Example storage-related traffic query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most storage traffic at every given moment over a six-minute time period. 12.3.4.3.2. Storage snapshot data kubevirt_vmsnapshot_disks_restored_from_source_total Returns the total number of virtual machine disks restored from the source virtual machine. Type: Gauge. kubevirt_vmsnapshot_disks_restored_from_source_bytes Returns the amount of space in bytes restored from the source virtual machine. Type: Gauge. Examples of storage snapshot data queries kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the total number of virtual machine disks restored from the source virtual machine. kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name="simple-vm", vm_namespace="default"} 1 1 This query returns the amount of space in bytes restored from the source virtual machine. 12.3.4.3.3. I/O performance The following queries can determine the I/O performance of storage devices: kubevirt_vmi_storage_iops_read_total Returns the amount of read I/O operations the virtual machine is performing per second. Type: Counter. kubevirt_vmi_storage_iops_write_total Returns the amount of write I/O operations the virtual machine is performing per second. Type: Counter. Example I/O performance query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1 1 This query returns the top 3 VMs performing the most I/O operations per second at every given moment over a six-minute time period. 12.3.4.4. Guest memory swapping metrics The following queries can identify which swap-enabled guests are performing the most memory swapping: kubevirt_vmi_memory_swap_in_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping in. Type: Gauge. kubevirt_vmi_memory_swap_out_traffic_bytes_total Returns the total amount (in bytes) of memory the virtual guest is swapping out. Type: Gauge. Example memory swapping query topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1 1 This query returns the top 3 VMs where the guest is performing the most memory swapping at every given moment over a six-minute time period. Note Memory swapping indicates that the virtual machine is under memory pressure. Increasing the memory allocation of the virtual machine can mitigate this issue. 12.3.4.5. Live migration metrics The following metrics can be queried to show live migration status: kubevirt_migrate_vmi_data_processed_bytes The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge. kubevirt_migrate_vmi_data_remaining_bytes The amount of guest operating system data that remains to be migrated. Type: Gauge. kubevirt_migrate_vmi_dirty_memory_rate_bytes The rate at which memory is becoming dirty in the guest operating system. Dirty memory is data that has been changed but not yet written to disk. Type: Gauge.
kubevirt_migrate_vmi_pending_count The number of pending migrations. Type: Gauge. kubevirt_migrate_vmi_scheduling_count The number of scheduling migrations. Type: Gauge. kubevirt_migrate_vmi_running_count The number of running migrations. Type: Gauge. kubevirt_migrate_vmi_succeeded The number of successfully completed migrations. Type: Gauge. kubevirt_migrate_vmi_failed The number of failed migrations. Type: Gauge. 12.3.5. Additional resources About OpenShift Container Platform monitoring Querying Prometheus Prometheus query examples 12.4. Exposing custom metrics for virtual machines OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. This monitoring stack is based on the Prometheus monitoring system. Prometheus is a time-series database and a rule evaluation engine for metrics. In addition to using the OpenShift Container Platform monitoring stack, you can enable monitoring for user-defined projects by using the CLI and query custom metrics that are exposed for virtual machines through the node-exporter service. 12.4.1. Configuring the node exporter service The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines. Prerequisites Install the OpenShift Container Platform CLI oc . Log in to the cluster as a user with cluster-admin privileges. Create the cluster-monitoring-config ConfigMap object in the openshift-monitoring project. Configure the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project by setting enableUserWorkload to true . Procedure Create the Service YAML file. In the following example, the file is called node-exporter-service.yaml . kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7 1 The node-exporter service that exposes the metrics from the virtual machines. 2 The namespace where the service is created. 3 The label for the service. The ServiceMonitor uses this label to match this service. 4 The name given to the port that exposes metrics on port 9100 for the ClusterIP service. 5 The target port used by node-exporter-service to listen for requests. 6 The TCP port number of the virtual machine that is configured with the monitor label. 7 The label used to match the virtual machine's pods. In this example, any virtual machine's pod with the label monitor and a value of metrics will be matched. Create the node-exporter service: USD oc create -f node-exporter-service.yaml 12.4.2. Configuring a virtual machine with the node exporter service Download the node-exporter file on to the virtual machine. Then, create a systemd service that runs the node-exporter service when the virtual machine boots. Prerequisites The pods for the component are running in the openshift-user-workload-monitoring project. Grant the monitoring-edit role to users who need to monitor this user-defined project. Procedure Log on to the virtual machine. Download the node-exporter file on to the virtual machine by using the directory path that applies to the version of node-exporter file. 
USD wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz Extract the executable and place it in the /usr/bin directory. USD sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz \ --directory /usr/bin --strip 1 "*/node_exporter" Create a node_exporter.service file in this directory path: /etc/systemd/system . This systemd service file runs the node-exporter service when the virtual machine reboots. [Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target Enable and start the systemd service. USD sudo systemctl enable node_exporter.service USD sudo systemctl start node_exporter.service Verification Verify that the node-exporter agent is reporting metrics from the virtual machine. USD curl http://localhost:9100/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5244e-05 go_gc_duration_seconds{quantile="0.25"} 3.0449e-05 go_gc_duration_seconds{quantile="0.5"} 3.7913e-05 12.4.3. Creating a custom monitoring label for virtual machines To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine's YAML file. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Access to the web console for stop and restart a virtual machine. Procedure Edit the template spec of your virtual machine configuration file. In this example, the label monitor has the value metrics . spec: template: metadata: labels: monitor: metrics Stop and restart the virtual machine to create a new pod with the label name given to the monitor label. 12.4.3.1. Querying the node-exporter service for metrics Metrics are exposed for virtual machines through an HTTP service endpoint under the /metrics canonical name. When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Obtain the HTTP service endpoint by specifying the namespace for the service: USD oc get service -n <namespace> <node-exporter-service> To list all available metrics for the node-exporter service, query the metrics resource. 
USD curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^USD" Example output node_arp_entries{device="eth0"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name="0",type="Processor"} 0 node_cooling_device_max_state{name="0",type="Processor"} 0 node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0 node_cpu_guest_seconds_total{cpu="0",mode="user"} 0 node_cpu_seconds_total{cpu="0",mode="idle"} 1.10586485e+06 node_cpu_seconds_total{cpu="0",mode="iowait"} 37.61 node_cpu_seconds_total{cpu="0",mode="irq"} 233.91 node_cpu_seconds_total{cpu="0",mode="nice"} 551.47 node_cpu_seconds_total{cpu="0",mode="softirq"} 87.3 node_cpu_seconds_total{cpu="0",mode="steal"} 86.12 node_cpu_seconds_total{cpu="0",mode="system"} 464.15 node_cpu_seconds_total{cpu="0",mode="user"} 1075.2 node_disk_discard_time_seconds_total{device="vda"} 0 node_disk_discard_time_seconds_total{device="vdb"} 0 node_disk_discarded_sectors_total{device="vda"} 0 node_disk_discarded_sectors_total{device="vdb"} 0 node_disk_discards_completed_total{device="vda"} 0 node_disk_discards_completed_total{device="vdb"} 0 node_disk_discards_merged_total{device="vda"} 0 node_disk_discards_merged_total{device="vdb"} 0 node_disk_info{device="vda",major="252",minor="0"} 1 node_disk_info{device="vdb",major="252",minor="16"} 1 node_disk_io_now{device="vda"} 0 node_disk_io_now{device="vdb"} 0 node_disk_io_time_seconds_total{device="vda"} 174 node_disk_io_time_seconds_total{device="vdb"} 0.054 node_disk_io_time_weighted_seconds_total{device="vda"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device="vdb"} 0.039 node_disk_read_bytes_total{device="vda"} 3.71867136e+08 node_disk_read_bytes_total{device="vdb"} 366592 node_disk_read_time_seconds_total{device="vda"} 19.128 node_disk_read_time_seconds_total{device="vdb"} 0.039 node_disk_reads_completed_total{device="vda"} 5619 node_disk_reads_completed_total{device="vdb"} 96 node_disk_reads_merged_total{device="vda"} 5 node_disk_reads_merged_total{device="vdb"} 0 node_disk_write_time_seconds_total{device="vda"} 240.66400000000002 node_disk_write_time_seconds_total{device="vdb"} 0 node_disk_writes_completed_total{device="vda"} 71584 node_disk_writes_completed_total{device="vdb"} 0 node_disk_writes_merged_total{device="vda"} 19761 node_disk_writes_merged_total{device="vdb"} 0 node_disk_written_bytes_total{device="vda"} 2.007924224e+09 node_disk_written_bytes_total{device="vdb"} 0 12.4.4. Creating a ServiceMonitor resource for the node exporter service You can use a Prometheus client library and scrape metrics from the /metrics endpoint to access and view the metrics exposed by the node-exporter service. Use a ServiceMonitor custom resource definition (CRD) to monitor the node exporter service. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Create a YAML file for the ServiceMonitor resource configuration. In this example, the service monitor matches any service with the label metrics and queries the exmet port every 30 seconds. apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics 1 The name of the ServiceMonitor . 
2 The namespace where the ServiceMonitor is created. 3 The interval at which the port will be queried. 4 The name of the port that is queried every 30 seconds Create the ServiceMonitor configuration for the node-exporter service. USD oc create -f node-exporter-metrics-monitor.yaml 12.4.4.1. Accessing the node exporter service outside the cluster You can access the node-exporter service outside the cluster and view the exposed metrics. Prerequisites You have access to the cluster as a user with cluster-admin privileges or the monitoring-edit role. You have enabled monitoring for the user-defined project by configuring the node-exporter service. Procedure Expose the node-exporter service. USD oc expose service -n <namespace> <node_exporter_service_name> Obtain the FQDN (Fully Qualified Domain Name) for the route. USD oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host Example output NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org Use the curl command to display metrics for the node-exporter service. USD curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics Example output go_gc_duration_seconds{quantile="0"} 1.5382e-05 go_gc_duration_seconds{quantile="0.25"} 3.1163e-05 go_gc_duration_seconds{quantile="0.5"} 3.8546e-05 go_gc_duration_seconds{quantile="0.75"} 4.9139e-05 go_gc_duration_seconds{quantile="1"} 0.000189423 12.4.5. Additional resources Core platform monitoring first steps Enabling monitoring for user-defined projects Accessing metrics as a developer Reviewing monitoring dashboards as a developer Monitoring application health by using health checks Creating and using config maps Controlling virtual machine states 12.5. Virtual machine health checks You can configure virtual machine (VM) health checks by defining readiness and liveness probes in the VirtualMachine resource. 12.5.1. About readiness and liveness probes Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive. A readiness probe determines whether a VM is ready to accept service requests. If the probe fails, the VM is removed from the list of available endpoints until the VM is ready. A liveness probe determines whether a VM is responsive. If the probe fails, the VM is deleted and a new VM is created to restore responsiveness. You can configure readiness and liveness probes by setting the spec.readinessProbe and the spec.livenessProbe fields of the VirtualMachine object. These fields support the following tests: HTTP GET The probe determines the health of the VM by using a web hook. The test is successful if the HTTP response code is between 200 and 399. You can use an HTTP GET test with applications that return HTTP status codes when they are completely initialized. TCP socket The probe attempts to open a socket to the VM. The VM is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. Guest agent ping The probe uses the guest-ping command to determine if the QEMU guest agent is running on the virtual machine. 12.5.1.1. Defining an HTTP readiness probe Define an HTTP readiness probe by setting the spec.readinessProbe.httpGet field of the virtual machine (VM) configuration. 
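Note The sample probes in the following procedures assume that an application inside the VM responds on port 1500 and the /healthz path; these values are examples only. Before you rely on the probe, you can confirm from inside the guest that the application responds, for example: USD curl -i http://localhost:1500/healthz Adjust the port, path, and timing values to match your workload.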
Procedure Include details of the readiness probe in the VM configuration file. Sample readiness probe with an HTTP GET test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8 # ... 1 The HTTP GET request to perform to connect to the VM. 2 The port of the VM that the probe queries. In the above example, the probe queries port 1500. 3 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is removed from the list of available endpoints. 4 The time, in seconds, after the VM starts before the readiness probe is initiated. 5 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 7 The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 8 The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.5.1.2. Defining a TCP readiness probe Define a TCP readiness probe by setting the spec.readinessProbe.tcpSocket field of the virtual machine (VM) configuration. Procedure Include details of the TCP readiness probe in the VM configuration file. Sample readiness probe with a TCP socket test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5 # ... 1 The time, in seconds, after the VM starts before the readiness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The TCP action to perform. 4 The port of the VM that the probe queries. 5 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.5.1.3. Defining an HTTP liveness probe Define an HTTP liveness probe by setting the spec.livenessProbe.httpGet field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. Procedure Include details of the HTTP liveness probe in the VM configuration file. Sample liveness probe with an HTTP GET test apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6 # ... 
1 The time, in seconds, after the VM starts before the liveness probe is initiated. 2 The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 3 The HTTP GET request to perform to connect to the VM. 4 The port of the VM that the probe queries. In the above example, the probe queries port 1500. The VM installs and runs a minimal HTTP server on port 1500 via cloud-init. 5 The path to access on the HTTP server. In the above example, if the handler for the server's /healthz path returns a success code, the VM is considered to be healthy. If the handler returns a failure code, the VM is deleted and a new VM is created. 6 The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.5.2. Defining a watchdog You can define a watchdog to monitor the health of the guest operating system by performing the following steps: Configure a watchdog device for the virtual machine (VM). Install the watchdog agent on the guest. The watchdog device monitors the agent and performs one of the following actions if the guest operating system is unresponsive: poweroff : The VM powers down immediately. If spec.running is set to true or spec.runStrategy is not set to manual , then the VM reboots. reset : The VM reboots in place and the guest operating system cannot react. Note The reboot time might cause liveness probes to time out. If cluster-level protections detect a failed liveness probe, the VM might be forcibly rescheduled, increasing the reboot time. shutdown : The VM gracefully powers down by stopping all services. Note Watchdog is not available for Windows VMs. 12.5.2.1. Configuring a watchdog device for the virtual machine You configure a watchdog device for the virtual machine (VM). Prerequisites The VM must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb . Procedure Create a YAML file with the following contents: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: "poweroff" 1 # ... 1 Specify poweroff , reset , or shutdown . The example above configures the i6300esb watchdog device on a RHEL8 VM with the poweroff action and exposes the device as /dev/watchdog . This device can now be used by the watchdog binary. Apply the YAML file to your cluster by running the following command: USD oc apply -f <file_name>.yaml Important This procedure is provided for testing watchdog functionality only and must not be run on production machines. Run the following command to verify that the VM is connected to the watchdog device: USD lspci | grep watchdog -i Run one of the following commands to confirm the watchdog is active: Trigger a kernel panic: # echo c > /proc/sysrq-trigger Stop the watchdog service: # pkill -9 watchdog 12.5.2.2. Installing the watchdog agent on the guest You install the watchdog agent on the guest and start the watchdog service. Procedure Log in to the virtual machine as root user. 
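Note Optionally, before you install the agent, you can confirm that the watchdog device that you configured in the previous section is exposed inside the guest, for example: # ls -l /dev/watchdog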
Install the watchdog package and its dependencies: # yum install watchdog Uncomment the following line in the /etc/watchdog.conf file and save the changes: #watchdog-device = /dev/watchdog Enable the watchdog service to start on boot: # systemctl enable --now watchdog.service 12.5.3. Defining a guest agent ping probe Define a guest agent ping probe by setting the spec.readinessProbe.guestAgentPing field of the virtual machine (VM) configuration. Important The guest agent ping probe is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The QEMU guest agent must be installed and enabled on the virtual machine. Procedure Include details of the guest agent ping probe in the VM configuration file. For example: Sample guest agent ping probe apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace # ... spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6 # ... 1 The guest agent ping probe to connect to the VM. 2 Optional: The time, in seconds, after the VM starts before the guest agent probe is initiated. 3 Optional: The delay, in seconds, between performing probes. The default delay is 10 seconds. This value must be greater than timeoutSeconds . 4 Optional: The number of seconds of inactivity after which the probe times out and the VM is assumed to have failed. The default value is 1. This value must be lower than periodSeconds . 5 Optional: The number of times that the probe is allowed to fail. The default is 3. After the specified number of attempts, the pod is marked Unready . 6 Optional: The number of times that the probe must report success, after a failure, to be considered successful. The default is 1. Create the VM by running the following command: USD oc create -f <file_name>.yaml 12.5.4. Additional resources Monitoring application health by using health checks 12.6. OpenShift Virtualization runbooks Runbooks for the OpenShift Virtualization Operator are maintained in the openshift/runbooks Git repository, and you can view them on GitHub. To diagnose and resolve issues that trigger OpenShift Virtualization alerts , follow the procedures in the runbooks. OpenShift Virtualization alerts are displayed in the Virtualization Overview tab in the web console. 12.6.1. CDIDataImportCronOutdated View the runbook for the CDIDataImportCronOutdated alert. 12.6.2. CDIDataVolumeUnusualRestartCount View the runbook for the CDIDataVolumeUnusualRestartCount alert. 12.6.3. CDIDefaultStorageClassDegraded View the runbook for the CDIDefaultStorageClassDegraded alert. 12.6.4. CDIMultipleDefaultVirtStorageClasses View the runbook for the CDIMultipleDefaultVirtStorageClasses alert. 12.6.5. CDINoDefaultStorageClass View the runbook for the CDINoDefaultStorageClass alert. 12.6.6. CDINotReady View the runbook for the CDINotReady alert. 12.6.7. CDIOperatorDown View the runbook for the CDIOperatorDown alert. 12.6.8. 
CDIStorageProfilesIncomplete View the runbook for the CDIStorageProfilesIncomplete alert. 12.6.9. CnaoDown View the runbook for the CnaoDown alert. 12.6.10. CnaoNMstateMigration View the runbook for the CnaoNMstateMigration alert. 12.6.11. HCOInstallationIncomplete View the runbook for the HCOInstallationIncomplete alert. 12.6.12. HPPNotReady View the runbook for the HPPNotReady alert. 12.6.13. HPPOperatorDown View the runbook for the HPPOperatorDown alert. 12.6.14. HPPSharingPoolPathWithOS View the runbook for the HPPSharingPoolPathWithOS alert. 12.6.15. KubemacpoolDown View the runbook for the KubemacpoolDown alert. 12.6.16. KubeMacPoolDuplicateMacsFound View the runbook for the KubeMacPoolDuplicateMacsFound alert. 12.6.17. KubeVirtComponentExceedsRequestedCPU The KubeVirtComponentExceedsRequestedCPU alert is deprecated . 12.6.18. KubeVirtComponentExceedsRequestedMemory The KubeVirtComponentExceedsRequestedMemory alert is deprecated . 12.6.19. KubeVirtCRModified View the runbook for the KubeVirtCRModified alert. 12.6.20. KubeVirtDeprecatedAPIRequested View the runbook for the KubeVirtDeprecatedAPIRequested alert. 12.6.21. KubeVirtNoAvailableNodesToRunVMs View the runbook for the KubeVirtNoAvailableNodesToRunVMs alert. 12.6.22. KubevirtVmHighMemoryUsage View the runbook for the KubevirtVmHighMemoryUsage alert. 12.6.23. KubeVirtVMIExcessiveMigrations View the runbook for the KubeVirtVMIExcessiveMigrations alert. 12.6.24. LowKVMNodesCount View the runbook for the LowKVMNodesCount alert. 12.6.25. LowReadyVirtControllersCount View the runbook for the LowReadyVirtControllersCount alert. 12.6.26. LowReadyVirtOperatorsCount View the runbook for the LowReadyVirtOperatorsCount alert. 12.6.27. LowVirtAPICount View the runbook for the LowVirtAPICount alert. 12.6.28. LowVirtControllersCount View the runbook for the LowVirtControllersCount alert. 12.6.29. LowVirtOperatorCount View the runbook for the LowVirtOperatorCount alert. 12.6.30. NetworkAddonsConfigNotReady View the runbook for the NetworkAddonsConfigNotReady alert. 12.6.31. NoLeadingVirtOperator View the runbook for the NoLeadingVirtOperator alert. 12.6.32. NoReadyVirtController View the runbook for the NoReadyVirtController alert. 12.6.33. NoReadyVirtOperator View the runbook for the NoReadyVirtOperator alert. 12.6.34. OrphanedVirtualMachineInstances View the runbook for the OrphanedVirtualMachineInstances alert. 12.6.35. OutdatedVirtualMachineInstanceWorkloads View the runbook for the OutdatedVirtualMachineInstanceWorkloads alert. 12.6.36. SingleStackIPv6Unsupported View the runbook for the SingleStackIPv6Unsupported alert. 12.6.37. SSPCommonTemplatesModificationReverted View the runbook for the SSPCommonTemplatesModificationReverted alert. 12.6.38. SSPDown View the runbook for the SSPDown alert. 12.6.39. SSPFailingToReconcile View the runbook for the SSPFailingToReconcile alert. 12.6.40. SSPHighRateRejectedVms View the runbook for the SSPHighRateRejectedVms alert. 12.6.41. SSPTemplateValidatorDown View the runbook for the SSPTemplateValidatorDown alert. 12.6.42. UnsupportedHCOModification View the runbook for the UnsupportedHCOModification alert. 12.6.43. VirtAPIDown View the runbook for the VirtAPIDown alert. 12.6.44. VirtApiRESTErrorsBurst View the runbook for the VirtApiRESTErrorsBurst alert. 12.6.45. VirtApiRESTErrorsHigh View the runbook for the VirtApiRESTErrorsHigh alert. 12.6.46. VirtControllerDown View the runbook for the VirtControllerDown alert. 12.6.47. 
VirtControllerRESTErrorsBurst View the runbook for the VirtControllerRESTErrorsBurst alert. 12.6.48. VirtControllerRESTErrorsHigh View the runbook for the VirtControllerRESTErrorsHigh alert. 12.6.49. VirtHandlerDaemonSetRolloutFailing View the runbook for the VirtHandlerDaemonSetRolloutFailing alert. 12.6.50. VirtHandlerRESTErrorsBurst View the runbook for the VirtHandlerRESTErrorsBurst alert. 12.6.51. VirtHandlerRESTErrorsHigh View the runbook for the VirtHandlerRESTErrorsHigh alert. 12.6.52. VirtOperatorDown View the runbook for the VirtOperatorDown alert. 12.6.53. VirtOperatorRESTErrorsBurst View the runbook for the VirtOperatorRESTErrorsBurst alert. 12.6.54. VirtOperatorRESTErrorsHigh View the runbook for the VirtOperatorRESTErrorsHigh alert. 12.6.55. VirtualMachineCRCErrors The runbook for the VirtualMachineCRCErrors alert is deprecated because the alert was renamed to VMStorageClassWarning . View the runbook for the VMStorageClassWarning alert. 12.6.56. VMCannotBeEvicted View the runbook for the VMCannotBeEvicted alert. 12.6.57. VMStorageClassWarning View the runbook for the VMStorageClassWarning alert. | [
"--- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: [\"kubevirt.io\"] resources: [\"virtualmachineinstances\"] verbs: [\"get\", \"create\", \"delete\"] - apiGroups: [\"subresources.kubevirt.io\"] resources: [\"virtualmachineinstances/console\"] verbs: [\"get\"] - apiGroups: [\"k8s.cni.cncf.io\"] resources: [\"network-attachment-definitions\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io",
"oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1",
"apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" 1 spec.param.maxDesiredLatencyMilliseconds: \"10\" 2 spec.param.sampleDurationSeconds: \"5\" 3 spec.param.sourceNode: \"worker1\" 4 spec.param.targetNode: \"worker2\" 5",
"oc apply -n <target_namespace> -f <latency_config_map>.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.14.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid",
"oc apply -n <target_namespace> -f <latency_job>.yaml",
"oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m",
"oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" spec.param.maxDesiredLatencyMilliseconds: \"10\" spec.param.sampleDurationSeconds: \"5\" spec.param.sourceNode: \"worker1\" spec.param.targetNode: \"worker2\" status.succeeded: \"true\" status.failureReason: \"\" status.completionTimestamp: \"2022-01-01T09:00:00Z\" status.startTimestamp: \"2022-01-01T09:00:07Z\" status.result.avgLatencyNanoSec: \"177000\" status.result.maxLatencyNanoSec: \"244000\" 1 status.result.measurementDurationSec: \"5\" status.result.minLatencyNanoSec: \"135000\" status.result.sourceNode: \"worker1\" status.result.targetNode: \"worker2\"",
"oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>",
"oc delete job -n <target_namespace> kubevirt-vm-latency-checkup",
"oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config",
"oc delete -f <latency_sa_roles_rolebinding>.yaml",
"--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"get\", \"update\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"create\", \"get\", \"delete\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/console\" ] verbs: [ \"get\" ] - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"create\", \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker",
"oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0 2 spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0\" 3",
"oc apply -n <target_namespace> -f <dpdk_config_map>.yaml",
"apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.14.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid",
"oc apply -n <target_namespace> -f <dpdk_job>.yaml",
"oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m",
"oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: \"dpdk-network-1\" spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.2.0\" spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.2.0\" status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.trafficGenSentPackets: \"480000000\" 5 status.result.trafficGenOutputErrorPackets: \"0\" 6 status.result.trafficGenInputErrorPackets: \"0\" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: \"480000000\" 10 status.result.vmUnderTestRxDroppedPackets: \"0\" 11 status.result.vmUnderTestTxDroppedPackets: \"0\" 12",
"oc delete job -n <target_namespace> dpdk-checkup",
"oc delete config-map -n <target_namespace> dpdk-checkup-config",
"oc delete -f <dpdk_sa_roles_rolebinding>.yaml",
"dnf install libguestfs-tools",
"composer-cli distros list",
"usermod -a -G weldr user",
"newgrp weldr",
"cat << EOF > dpdk-vm.toml name = \"dpdk_image\" description = \"Image to use with the DPDK checkup\" version = \"0.0.1\" distro = \"rhel-87\" [[packages]] name = \"dpdk\" [[packages]] name = \"dpdk-tools\" [[packages]] name = \"driverctl\" [[packages]] name = \"tuned-profiles-cpu-partitioning\" [customizations.kernel] append = \"default_hugepagesz=1GB hugepagesz=1G hugepages=8 isolcpus=2-7\" [customizations.services] disabled = [\"NetworkManager-wait-online\", \"sshd\"] EOF",
"composer-cli blueprints push dpdk-vm.toml",
"composer-cli compose start dpdk_image qcow2",
"composer-cli compose status",
"composer-cli compose image <UUID>",
"cat <<EOF >customize-vm echo isolated_cores=2-7 > /etc/tuned/cpu-partitioning-variables.conf tuned-adm profile cpu-partitioning echo \"options vfio enable_unsafe_noiommu_mode=1\" > /etc/modprobe.d/vfio-noiommu.conf EOF",
"cat <<EOF >first-boot driverctl set-override 0000:06:00.0 vfio-pci driverctl set-override 0000:07:00.0 vfio-pci mkdir /mnt/huge mount /mnt/huge --source nodev -t hugetlbfs -o pagesize=1GB EOF",
"virt-customize -a <UUID>.qcow2 --run=customize-vm --firstboot=first-boot --selinux-relabel",
"cat << EOF > Dockerfile FROM scratch COPY <uuid>-disk.qcow2 /disk/ EOF",
"podman build . -t dpdk-rhel:latest",
"podman push dpdk-rhel:latest",
"topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1",
"topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1",
"topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1",
"kubevirt_vmsnapshot_disks_restored_from_source_total{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1",
"kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1",
"topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1",
"topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1",
"kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7",
"oc create -f node-exporter-service.yaml",
"wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz",
"sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"",
"[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target",
"sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service",
"curl http://localhost:9100/metrics",
"go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05",
"spec: template: metadata: labels: monitor: metrics",
"oc get service -n <namespace> <node-exporter-service>",
"curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"",
"node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics",
"oc create -f node-exporter-metrics-monitor.yaml",
"oc expose service -n <namespace> <node_exporter_service_name>",
"oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host",
"NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org",
"curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics",
"go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8",
"oc create -f <file_name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5",
"oc create -f <file_name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6",
"oc create -f <file_name>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1",
"oc apply -f <file_name>.yaml",
"lspci | grep watchdog -i",
"echo c > /proc/sysrq-trigger",
"pkill -9 watchdog",
"yum install watchdog",
"#watchdog-device = /dev/watchdog",
"systemctl enable --now watchdog.service",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6",
"oc create -f <file_name>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/monitoring |
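The VM latency checkup above is driven by several separate oc invocations. For repeated runs it can be convenient to wrap them in a small shell script; the sketch below assumes the ServiceAccount/Role manifest, the ConfigMap, and the Job from this section are saved under the placeholder file names used above, and that the target namespace already exists (replace <target_namespace> with its real name):

NAMESPACE=<target_namespace>
# Service account, roles, and role bindings required by the checkup
oc apply -n "$NAMESPACE" -f <latency_sa_roles_rolebinding>.yaml
# Input parameters for the checkup
oc apply -n "$NAMESPACE" -f <latency_config_map>.yaml
# Job that runs the checkup itself
oc apply -n "$NAMESPACE" -f <latency_job>.yaml
# Wait for completion, then read the results the checkup writes back into its ConfigMap
oc wait job kubevirt-vm-latency-checkup -n "$NAMESPACE" --for condition=complete --timeout 6m
oc get configmap kubevirt-vm-latency-checkup-config -n "$NAMESPACE" -o yaml

Because the results are stored only in that ConfigMap, record them before running the deletion commands shown earlier.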
3.4.3. Updating Users' Authentication | 3.4.3. Updating Users' Authentication When running the basic useradd username command, the password is automatically set to never expire (see the /etc/shadow file). If you want to change this, use passwd , the standard utility for administering the /etc/passwd file. The syntax of the passwd command looks as follows: You can, for example, lock the specified account. The locking is performed by rendering the encrypted password into an invalid string by prefixing the encrypted string with an exclamation mark ( ! ). If you later find a reason to unlock the account, passwd provides a reverse operation. Only root can carry out these two operations. Example 3.8. Unlocking a User Password At first, the -l option locks robert 's account password successfully. However, running the passwd -u command does not unlock the password because by default passwd refuses to create a passwordless account. If you want a password for an account to expire, run passwd with the -e option. The user will be forced to change the password during the next login attempt: As far as the password lifetime is concerned, setting the minimum time between password changes is useful for forcing the user to really change the password. The system administrator can set the minimum (the -n option) and the maximum (the -x option) lifetimes. To inform the user about their password expiration, use the -w option. All these options must be accompanied with the number of days and can be run as root only. Example 3.9. Adjusting Aging Data for User Passwords The above command has set the minimum password lifetime to 10 days, the maximum password lifetime to 60 days, and the number of days in advance that jane will begin receiving warnings that her password will expire to 3 days. Later, when you cannot remember the password settings, use the -S option, which outputs a short summary of the password status for a given account: With the useradd command, you can also set the number of days after a password expires before the account is disabled. A value of 0 disables the account as soon as the password has expired, and a value of -1 disables the feature, that is, the user will have to change his password when the password expires. The -f option is used to specify the number of days after a password expires until the account is disabled (but may be unblocked by the system administrator): For more information on the passwd command, see the passwd (1) man page. | [
"passwd option(s) username",
"passwd -l username passwd -u username",
"~]# passwd -l robert Locking password for user robert. passwd: Success ~]# passwd -u robert passwd: Warning: unlocked password would be empty passwd: Unsafe operation (use -f to force)",
"passwd -e username",
"~]# passwd -n 10 -x 60 -w 3 jane",
"~]# passwd -S jane jane LK 2014-07-22 10 60 3 -1 (Password locked.)",
"useradd -f number-of-days username"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/cl-user-passwords |
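Taken together, the options described above cover a typical sequence for hardening a new account. The following example is illustrative only (the user name jsmith is hypothetical) and must be run as root:

useradd -f 30 jsmith              # disable the account 30 days after the password expires
passwd jsmith                     # set an initial password interactively
passwd -e jsmith                  # expire the password so it must be changed at the next login
passwd -n 10 -x 60 -w 3 jsmith    # minimum 10 days, maximum 60 days, warn 3 days before expiry
passwd -S jsmith                  # verify the resulting password status and aging values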
8.35. crash-trace-command | 8.35. crash-trace-command 8.35.1. RHBA-2014:1462 - crash-trace-command bug fix update Updated crash-trace-command packages that fix one bug are now available for Red Hat Enterprise Linux 6. The crash-trace-command packages provide the trace extension module for the crash utility, allowing it to read ftrace data from a core dump file. Bug Fix BZ# 895899 Previously, crash-trace-command displayed incorrect "Packager" and "Vendor" information. With this update, crash-trace-command prints "Red Hat, Inc." in these fields as expected. Users of crash-trace-command are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/crash-trace-command |
2.3. Installing an IdM Server: Introduction | 2.3. Installing an IdM Server: Introduction Note The installation procedures and examples in the following sections are not mutually exclusive: you can combine them to achieve the required result. For example, you can install a server with integrated DNS and with an externally hosted root CA. The ipa-server-install utility installs and configures an IdM server. Before installing a server, see these sections: Section 2.3.1, "Determining Whether to Use Integrated DNS" Section 2.3.2, "Determining What CA Configuration to Use" The ipa-server-install utility provides a non-interactive installation mode which allows automated and unattended server setup. For details, see Section 2.3.7, "Installing a Server Non-Interactively" The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. 2.3.1. Determining Whether to Use Integrated DNS IdM supports installing a server with integrated DNS or without integrated DNS. An IdM server with integrated DNS services The integrated DNS server provided by IdM is not designed to be used as a general-purpose DNS server. It only supports features related to IdM deployment and maintenance. It does not support some of the advanced DNS features. Red Hat strongly recommends IdM-integrated DNS for basic usage within the IdM deployment: When the IdM server also manages DNS, there is tight integration between DNS and native IdM tools which enables automating some of the DNS record management. Note that even if an IdM server is used as a master DNS server, other external DNS servers can still be used as slave servers. For example, if your environment is already using another DNS server, such as an Active Directory-integrated DNS server, you can delegate only the IdM primary domain to the IdM-integrated DNS. You are not required to migrate DNS zones over to the IdM-integrated DNS. Note If you need to issue certificates for IdM clients with an IP address in the Subject Alternative Name (SAN) extension, you must use the IdM integrated DNS service. To install a server with integrated DNS, see Section 2.3.3, "Installing a Server with Integrated DNS" An IdM server without integrated DNS services An external DNS server is used to provide the DNS services. Consider installing an IdM server without DNS in these situations: If you require advanced DNS features beyond the scope of the IdM DNS In environments with a well-established DNS infrastructure which allows you to use external DNS servers To install a server without integrated DNS, see Section 2.3.4, "Installing a Server Without Integrated DNS" Important Make sure your system meets the DNS requirements described in Section 2.1.5, "Host Name and DNS Configuration" . Maintenance Requirements for Integrated or External DNS When using an integrated DNS server, most of the DNS record maintenance is automated. You only must: set up correct delegation from the parent domain to the IdM servers For example, if the IdM domain name is ipa.example.com , it must be properly delegated from the example.com domain. Note You can verify the delegation using the following command: IP_address is the IP address of the server that manages the example.com DNS domain. If the delegation is correct, the command lists the IdM servers that have a DNS server installed. 
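For example, with a parent-domain DNS server at 192.0.2.53 and two IdM servers (all addresses and host names here are illustrative), a correct delegation check returns the IdM name servers:

$ dig @192.0.2.53 +norecurse +short ipa.example.com. NS
server1.ipa.example.com.
server2.ipa.example.com.

An empty answer indicates that the delegation has not yet been set up in the parent domain.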
When using an external DNS server, you must: manually create the new domain on the DNS server fill the new domain manually with records from the zone file that is generated by the IdM installer manually update the records after installing or removing a replica, as well as after any changes in the service configuration, such as after an Active Directory trust is configured Preventing DNS Amplification Attacks The default configuration of the IdM-integrated DNS server allows all clients to issue recursive queries to the DNS server. If your server is deployed in a network with an untrusted client, change the server's configuration to limit recursion to authorized clients only. [1] To ensure that only authorized clients are allowed to issue recursive queries, add the appropriate access control list (ACL) statements to the /etc/named.conf file on your server. For example: 2.3.2. Determining What CA Configuration to Use IdM supports installing a server with an integrated IdM certificate authority (CA) or without a CA. Server with an integrated IdM CA This is the default configuration suitable for most deployments. Certificate System uses a CA signing certificate to create and sign the certificates in the IdM domain. Warning Red Hat strongly recommends to keep the CA services installed on more than one server. For information on installing a replica of the initial server including the CA services, see Section 4.5.4, "Installing a Replica with a CA" . If you install the CA on only one server, you risk losing the CA configuration without a chance of recovery if the CA server fails. See Section B.2.6, "Recovering a Lost CA Server" for details. The IdM CA signing certificate can be a root CA, which is also called self-signed , or it can be signed by an external CA. The IdM CA is the root CA This is the default configuration. To install a server with this configuration, see Section 2.3.3, "Installing a Server with Integrated DNS" and Section 2.3.4, "Installing a Server Without Integrated DNS" . An external CA is the root CA The IdM CA is subordinate to an external CA. However, all certificates for the IdM domain are still issued by the Certificate System instance. The external CA can be a corporate CA or a third-party CA, such as Verisign or Thawte. The external CA can be a root CA or a subordinate CA. The certificates issued within the IdM domain are potentially subject to restrictions set by the external root CA or intermediate CA certificates for attributes, such as the validity period, or domains for which certificates can be issued. To install a server with an externally-hosted root CA, see Section 2.3.5, "Installing a Server with an External CA as the Root CA" Server without a CA This configuration option is suitable for very rare cases when restrictions within the infrastructure do not allow to install certificate services with the server. You must request these certificates from a third-party authority prior to the installation: An LDAP server certificate and a private key An Apache server certificate and a private key Full CA certificate chain of the CA that issued the LDAP and Apache server certificates Warning Managing certificates without the integrated IdM CA presents a significant maintenance burden. For example, you must manually manage the Apache web server and LDAP server certificates of the IdM server. This includes: Creating and uploading certificates. Monitoring the expiration date of certificates. 
Note that the certmonger service does not track certificates if you installed IdM without the integrated CA. Renewing certificates before they expire to avoid outages. To install a server without an integrated CA, see Section 2.3.6, "Installing Without a CA" Note If you install an IdM domain without a CA, you can install the CA services afterwards. To install a CA to already existing IdM domain, see Section 26.8, "Installing a CA Into an Existing IdM Domain" . 2.3.3. Installing a Server with Integrated DNS Note If you are unsure what DNS or CA configuration is appropriate for you, see Section 2.3.1, "Determining Whether to Use Integrated DNS" and Section 2.3.2, "Determining What CA Configuration to Use" . To install a server with integrated DNS, provide the following information during the installation process: DNS forwarders The following DNS forwarder settings are supported: one or more forwarders (the --forwarder option in non-interactive installation) no forwarders (the --no-forwarders option in non-interactive installation) If you are unsure whether to use DNS forwarding, see Section 33.6, "Managing DNS Forwarding" . Reverse DNS zones The following reverse DNS zone settings are supported: automatic detection of the reverse zones that need to be created in IdM DNS (the default setting in interactive installation, the --auto-reverse option in non-interactive installation) no reverse zone auto-detection (the --no-reverse option in interactive installation) Note that the --allow-zone-overlap option is ignored if the --auto-reverse option is set. Using the combination of options: thus does not create reverse zones which would overlap with already existing DNS zones, for example on another DNS server. For non-interactive installation, add the --setup-dns option as well. Example 2.1. Installing a Server with Integrated DNS This procedure installs a server: with integrated DNS with integrated IdM CA as the root CA, which is the default CA configuration Run the ipa-server-install utility. The script prompts to configure an integrated DNS service. Enter yes . The script prompts for several required settings. To accept the default values in brackets, press Enter . To provide a value different than the proposed default value, enter the required value. Warning Red Hat strongly recommends that the Kerberos realm name is the same as the primary DNS domain name, with all letters uppercase. For example, if the primary DNS domain is ipa.example.com , use IPA.EXAMPLE.COM for the Kerberos realm name. Different naming practices will prevent you from using Active Directory trusts and can have other negative consequences. Enter the passwords for the Directory Server superuser, cn=Directory Manager , and for the admin IdM system user account. The script prompts for DNS forwarders. To configure DNS forwarders, enter yes , and then follow the instructions on the command line. The installation process will add the forwarder IP addresses to the /etc/named.conf file on the installed IdM server. For the forwarding policy default settings, see the --forward-policy description in the ipa-dns-install (1) man page. See also the section called "Forward Policies" for details. If you do not want to use DNS forwarding, enter no . The script prompts to check if any DNS reverse (PTR) records for the IP addresses associated with the server need to be configured. If you run the search and missing reverse zones are discovered, the script asks you whether to create the reverse zones along with the PTR records. 
Note Using IdM to manage reverse zones is optional. You can use an external DNS service for this purpose instead. Enter yes to confirm the server configuration. The installation script now configures the server. Wait for the operation to complete. Add DNS delegation from the parent domain to the IdM DNS domain. For example, if the IdM DNS domain is ipa.example.com , add a name server (NS) record to the example.com parent domain. Important This step must be repeated each time an IdM DNS server is installed. The script recommends you to back up the CA certificate and to make sure the required network ports are open. For information about IdM port requirements and instructions on how to open these ports, see Section 2.1.6, "Port Requirements" . To test the new server: Authenticate to the Kerberos realm using the admin credentials. This verifies that admin is properly configured and the Kerberos realm is accessible. Run a command such as ipa user-find . On a new server, the command prints the only configured user: admin . 2.3.4. Installing a Server Without Integrated DNS Note If you are unsure what DNS or CA configuration is appropriate for you, see Section 2.3.1, "Determining Whether to Use Integrated DNS" and Section 2.3.2, "Determining What CA Configuration to Use" . To install a server without integrated DNS, run the ipa-server-install utility without any DNS-related options. Example 2.2. Installing a Server Without Integrated DNS This procedure installs a server: without integrated DNS with integrated IdM CA as the root CA, which is the default CA configuration Run the ipa-server-install utility. The script prompts to configure an integrated DNS service. Press Enter to select the default no option. The script prompts for several required settings. To accept the default values in brackets, press Enter . To provide a value different than the proposed default value, enter the required value. Warning Red Hat strongly recommends that the Kerberos realm name is the same as the primary DNS domain name, with all letters uppercase. For example, if the primary DNS domain is ipa.example.com , use IPA.EXAMPLE.COM for the Kerberos realm name. Different naming practices will prevent you from using Active Directory trusts and can have other negative consequences. Enter the passwords for the Directory Server superuser, cn=Directory Manager , and for the admin IdM system user account. Enter yes to confirm the server configuration. The installation script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. The script recommends you to back up the CA certificate and to make sure the required network ports are open. For information about IdM port requirements and instructions on how to open these ports, see Section 2.1.6, "Port Requirements" . To test the new server: Authenticate to the Kerberos realm using the admin credentials. This verifies that admin is properly configured and the Kerberos realm is accessible. Run a command such as ipa user-find . On a new server, the command prints the only configured user: admin . 2.3.5. 
Installing a Server with an External CA as the Root CA Note If you are unsure what DNS or CA configuration is appropriate for you, see Section 2.3.1, "Determining Whether to Use Integrated DNS" and Section 2.3.2, "Determining What CA Configuration to Use" . To install a server and chain it with an external CA as the root CA, pass these options with the ipa-server-install utility: --external-ca specifies that you want to use an external CA. --external-ca-type specifies the type of the external CA. See the ipa-server-install (1) man page for details. Otherwise, most of the installation procedure is the same as in Section 2.3.3, "Installing a Server with Integrated DNS" or Section 2.3.4, "Installing a Server Without Integrated DNS" . During the configuration of the Certificate System instance, the utility prints the location of the certificate signing request (CSR): /root/ipa.csr : When this happens: Submit the CSR located in /root/ipa.csr to the external CA. The process differs depending on the service to be used as the external CA. Important It can be necessary to request the appropriate extensions for the certificate. The CA signing certificate generated for Identity Management must be a valid CA certificate. This requires that you set the CA parameter in the basic constraints extension true . For further details, see the Basic Constraints section in RFC 5280 . Retrieve the issued certificate and the CA certificate chain for the issuing CA in a base 64-encoded blob (either a PEM file or a Base_64 certificate from a Windows CA). Again, the process differs for every certificate service. Usually, a download link on a web page or in the notification email allows the administrator to download all the required certificates. Important Be sure to get the full certificate chain for the CA, not just the CA certificate. Run ipa-server-install again, this time specifying the locations and names of the newly-issued CA certificate and the CA chain files. For example: Note The ipa-server-install --external-ca command can sometimes fail with the following error: This failure occurs when the *_proxy environmental variables are set. For a solution on how to fix this problem, see Section B.1.1, "External CA Installation Fails" 2.3.6. Installing Without a CA Note If you are unsure what DNS or CA configuration is appropriate for you, see Section 2.3.1, "Determining Whether to Use Integrated DNS" and Section 2.3.2, "Determining What CA Configuration to Use" . To install a server without a CA, you must provide the required certificates manually by adding options to the ipa-server-install utility. Other than that, most of the installation procedure is the same as in Section 2.3.3, "Installing a Server with Integrated DNS" or Section 2.3.4, "Installing a Server Without Integrated DNS" . Important You cannot install a server or replica using self-signed third-party server certificates. 
Certificates Required to Install an IdM Server without a CA For a successful CA-less IdM server installation, you must provide the following certificates: The LDAP server certificate and private key, supplied using these options: --dirsrv-cert-file for the certificate and private key files for the LDAP server certificate --dirsrv-pin for the password to access the private key in the files specified in --dirsrv-cert-file The Apache server certificate and private key, supplied using these options: --http-cert-file for the certificate and private key files for the Apache server certificate --http-pin for the password to access the private key in the files specified in --http-cert-file The full CA certificate chain of the CA that issued the LDAP and Apache server certificates, supplied using these options: --dirsrv-cert-file and --http-cert-file for the certificate files with the full CA certificate chain or a part of it You can provide the files specified in the --dirsrv-cert-file and --http-cert-file options in the following formats: Privacy-Enhanced Mail (PEM) encoded certificate (RFC 7468). Note that the IdM installer accepts concatenated PEM-encoded objects. Distinguished Encoding Rules (DER) PKCS #7 certificate chain objects PKCS #8 private key objects PKCS #12 archives You can specify the --dirsrv-cert-file and --http-cert-file options multiple times to specify multiple files. If necessary, the certificate files to complete the full CA certificate chain, supplied using this option: --ca-cert-file , which you can add this option multiple times Optionally, the certificate files to provide an external Kerberos key distribution center (KDC) PKINIT certificate, supplied using these options: --pkinit-cert-file for the Kerberos KDC SSL certificate and private key --pkinit-pin for the password to unlock the Kerberos KDC private key If you do not provide the PKINIT certificate, ipa-server-install configures the IdM server with a local KDC with a self-signed certificate. For details, see Chapter 27, Kerberos PKINIT Authentication in IdM . The files provided using --dirsrv-cert-file and --http-cert-file combined with the files provided using --ca-cert-file must contain the full CA certificate chain of the CA that issued the LDAP and Apache server certificates. For details on what the certificate file formats these options accept, see the ipa-server-install (1) man page. Note The listed command-line options are incompatible with the --external-ca option. Note Earlier versions of Identity Management used the --root-ca-file option to specify the PEM file of the root CA certificate. This is no longer necessary because the trusted CA is always the issuer of the DS and HTTP server certificates. IdM now automatically recognizes the root CA certificate from the certificates specified by --dirsrv-cert-file , --http-cert-file , and --ca-cert-file . Example 2.3. Command example for installing an IdM server without a CA 2.3.7. Installing a Server Non-Interactively Note If you are unsure what DNS or CA configuration is appropriate for you, see Section 2.3.1, "Determining Whether to Use Integrated DNS" and Section 2.3.2, "Determining What CA Configuration to Use" . 
The minimum required options for a non-interactive installation are: --ds-password to provide the password for the Directory Manager (DM), the Directory Server super user --admin-password to provide the password for admin , the IdM administrator --realm to provide the Kerberos realm name --unattended to let the installation process select default options for the host name and domain name Optionally, you can provide custom values for these settings: --hostname for the server host name --domain for the domain name You can also use the --dirsrv-config-file parameter to change default Directory Server settings, by specifying the path to a LDIF file with custom values. For more information, see IdM now supports setting individual Directory Server options during server or replica installation in the Release Notes for Red Hat Enterprise Linux 7.3 . Warning Red Hat strongly recommends that the Kerberos realm name is the same as the primary DNS domain name, with all letters uppercase. For example, if the primary DNS domain is ipa.example.com , use IPA.EXAMPLE.COM for the Kerberos realm name. Different naming practices will prevent you from using Active Directory trusts and can have other negative consequences. For a complete list of options accepted by ipa-server-install , run the ipa-server-install --help command. Example 2.4. Basic Installation without Interaction Run the ipa-server-install utility, providing the required settings. For example, the following installs a server without integrated DNS and with an integrated CA: The setup script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. The script recommends you to back up the CA certificate and to make sure the required network ports are open. For information about IdM port requirements and instructions on how to open these ports, see Section 2.1.6, "Port Requirements" . To test the new server: Authenticate to the Kerberos realm using the admin credentials. This verifies that admin is properly configured and the Kerberos realm is accessible. Run a command such as ipa user-find . On a new server, the command prints the only configured user: admin . [1] For details, see the DNS Amplification Attacks page. | [
"dig @ IP_address +norecurse +short ipa.example.com. NS",
"acl authorized { 192.0.2.0/24 ; 198.51.100.0/24 ; }; options { allow-query { any; }; allow-recursion { authorized ; }; };",
"ipa-server-install --auto-reverse --allow-zone-overlap",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]: yes",
"Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"Continue to configure the system with these values? [no]: yes",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]:",
"Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Continue to configure the system with these values? [no]: yes",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------",
"Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate",
"ipa-server-install --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed",
"ipa-server-install --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --ca-cert-file ca.crt",
"ipa-server-install --realm EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/install-server |
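Combining the options covered in the sections above, a fully unattended installation with integrated DNS can be scripted as in the following sketch. The host name, domain, realm, forwarder address, and passwords are placeholders, and the option set should be trimmed to match your environment:

ipa-server-install --unattended \
    --hostname server.ipa.example.com \
    --domain ipa.example.com \
    --realm IPA.EXAMPLE.COM \
    --ds-password 'DM_password' \
    --admin-password 'admin_password' \
    --setup-dns \
    --forwarder 192.0.2.254 \
    --auto-reverse --allow-zone-overlap

As noted above, the delegation from the parent domain still has to be added manually after the installation completes.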
Chapter 61. Next steps | Chapter 61. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Process Automation Manager project | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/next_steps_5 |
6.4. Setting up Active Directory for Synchronization | 6.4. Setting up Active Directory for Synchronization Synchronizing user accounts is enabled within IdM. It is only necessary to set up a synchronization agreement ( Section 6.5.1, "Creating Synchronization Agreements" ). However, Active Directory must be configured so that the Identity Management server can connect to it. 6.4.1. Creating an Active Directory User for Synchronization On the Windows server, it is necessary to create the user that the IdM server will use to connect to the Active Directory domain. The process for creating a user in Active Directory is covered in the Windows server documentation at http://technet.microsoft.com/en-us/library/cc732336.aspx . The new user account must have the proper permissions: Grant the synchronization user account Replicating directory changes rights to the synchronized Active Directory subtree. Replicator rights are required for the synchronization user to perform synchronization operations. Replicator rights are described in http://support.microsoft.com/kb/303972 . Add the synchronization user as a member of the Account Operators and Enterprise Read-only Domain Controllers groups. It is not necessary for the user to belong to the Domain Admins group. 6.4.2. Setting up an Active Directory Certificate Authority The Identity Management server connects to the Active Directory server using a secure connection. This requires that the Active Directory server have a CA certificate or CA certificate chain available that can be imported into the Identity Management security databases, so that the Windows server is a trusted peer. While this could technically be done with an external (to Active Directory) CA, most deployments should use the Certificate Services available with Active Directory. The procedure for setting up and configuring certificate services on Active Directory is covered in the Microsoft documentation at http://technet.microsoft.com/en-us/library/cc772393(v=WS.10).aspx . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/setting_up_active_directory |
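The CA certificate or certificate chain described in Section 6.4.2 eventually has to be imported into the IdM security databases, so it is useful to have it in a local PEM file on the IdM server. One way to inspect the chain that the Active Directory server presents, assuming LDAPS is already enabled on port 636 (the host name is illustrative), is to capture it with OpenSSL:

$ openssl s_client -connect adserver.example.com:636 -showcerts </dev/null > /root/ad-chain.txt

The CA certificate can then be copied out of the captured output, or exported directly from the Active Directory Certificate Services console, and saved in PEM format for the import.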
Chapter 2. Red Hat Cluster Suite Component Summary | Chapter 2. Red Hat Cluster Suite Component Summary This chapter provides a summary of Red Hat Cluster Suite components and consists of the following sections: Section 2.1, "Cluster Components" Section 2.2, "Man Pages" Section 2.3, "Compatible Hardware" 2.1. Cluster Components Table 2.1, "Red Hat Cluster Manager Software Subsystem Components" summarizes Red Hat Cluster Suite components. Table 2.1. Red Hat Cluster Manager Software Subsystem Components Function Components Description Conga luci Remote Management System - Management Station ricci Remote Management System - Managed Station Cluster Configuration Tool system-config-cluster Command used to manage cluster configuration in a graphical setting. Cluster Logical Volume Manager (CLVM) clvmd The daemon that distributes LVM metadata updates around a cluster. It must be running on all nodes in the cluster and will give an error if a node in the cluster does not have this daemon running. lvm LVM2 tools. Provides the command-line tools for LVM2.. system-config-lvm Provides graphical user interface for LVM2. lvm.conf The LVM configuration file. The full path is /etc/lvm/lvm.conf. . Cluster Configuration System (CCS) ccs_tool ccs_tool is part of the Cluster Configuration System (CCS). It is used to make online updates of CCS configuration files. Additionally, it can be used to upgrade cluster configuration files from CCS archives created with GFS 6.0 (and earlier) to the XML format configuration format used with this release of Red Hat Cluster Suite. ccs_test Diagnostic and testing command that is used to retrieve information from configuration files through ccsd . ccsd CCS daemon that runs on all cluster nodes and provides configuration file data to cluster software. cluster.conf This is the cluster configuration file. The full path is /etc/cluster/cluster.conf . Cluster Manager (CMAN) cman.ko The kernel module for CMAN. cman_tool This is the administrative front end to CMAN. It starts and stops CMAN and can change some internal parameters such as votes. libcman.so.< version number > Library for programs that need to interact with cman.ko . Resource Group Manager (rgmanager) clusvcadm Command used to manually enable, disable, relocate, and restart user services in a cluster clustat Command used to display the status of the cluster, including node membership and services running. clurgmgrd Daemon used to handle user service requests including service start, service disable, service relocate, and service restart clurmtabd Daemon used to handle Clustered NFS mount tables Fence fence_apc Fence agent for APC power switch. fence_bladecenter Fence agent for for IBM Bladecenters with Telnet interface. fence_bullpap Fence agent for Bull Novascale Platform Administration Processor (PAP) Interface. fence_drac Fencing agent for Dell Remote Access Card fence_ipmilan Fence agent for machines controlled by IPMI (Intelligent Platform Management Interface) over LAN. fence_wti Fence agent for WTI power switch. fence_brocade Fence agent for Brocade Fibre Channel switch. fence_mcdata Fence agent for McData Fibre Channel switch. fence_vixel Fence agent for Vixel Fibre Channel switch. fence_sanbox2 Fence agent for SANBox2 Fibre Channel switch. fence_ilo Fence agent for HP ILO interfaces (formerly fence_rib). fence_rsa I/O Fencing agent for IBM RSA II. fence_gnbd Fence agent used with GNBD storage. 
fence_scsi I/O fencing agent for SCSI persistent reservations fence_egenera Fence agent used with Egenera BladeFrame system. fence_manual Fence agent for manual interaction. NOTE This component is not supported for production environments. fence_ack_manual User interface for fence_manual agent. fence_node A program which performs I/O fencing on a single node. fence_xvm I/O Fencing agent for Xen virtual machines. fence_xvmd I/O Fencing agent host for Xen virtual machines. fence_tool A program to join and leave the fence domain. fenced The I/O Fencing daemon. DLM libdlm.so.< version number > Library for Distributed Lock Manager (DLM) support. dlm.ko Kernel module that is installed on cluster nodes for Distributed Lock Manager (DLM) support. GULM lock_gulmd Server/daemon that runs on each node and communicates with all nodes in GFS cluster. libgulm.so. xxx Library for GULM lock manager support gulm_tool Command that configures and debugs the lock_gulmd server. GFS gfs.ko Kernel module that implements the GFS file system and is loaded on GFS cluster nodes. gfs_fsck Command that repairs an unmounted GFS file system. gfs_grow Command that grows a mounted GFS file system. gfs_jadd Command that adds journals to a mounted GFS file system. gfs_mkfs Command that creates a GFS file system on a storage device. gfs_quota Command that manages quotas on a mounted GFS file system. gfs_tool Command that configures or tunes a GFS file system. This command can also gather a variety of information about the file system. mount.gfs Mount helper called by mount(8) ; not used by user. lock_harness.ko Implements a pluggable lock module interface for GFS that allows for a variety of locking mechanisms to be used. lock_dlm.ko A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the DLM lock manager in Red Hat Cluster Suite. lock_gulm.ko A lock module that implements GULM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the GULM lock manager in Red Hat Cluster Suite. lock_nolock.ko A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko and provides local locking. GNBD gnbd.ko Kernel module that implements the GNBD device driver on clients. gnbd_export Command to create, export and manage GNBDs on a GNBD server. gnbd_import Command to import and manage GNBDs on a GNBD client. gnbd_serv A server daemon that allows a node to export local storage over the network. LVS pulse This is the controlling process which starts all other daemons related to LVS routers. At boot time, the daemon is started by the /etc/rc.d/init.d/pulse script. It then reads the configuration file /etc/sysconfig/ha/lvs.cf . On the active LVS router, pulse starts the LVS daemon. On the backup router, pulse determines the health of the active router by executing a simple heartbeat at a user-configurable interval. If the active LVS router fails to respond after a user-configurable interval, it initiates failover. During failover, pulse on the backup LVS router instructs the pulse daemon on the active LVS router to shut down all LVS services, starts the send_arp program to reassign the floating IP addresses to the backup LVS router's MAC address, and starts the lvs daemon. lvsd The lvs daemon runs on the active LVS router once called by pulse . 
It reads the configuration file /etc/sysconfig/ha/lvs.cf , calls the ipvsadm utility to build and maintain the IPVS routing table, and assigns a nanny process for each configured LVS service. If nanny reports a real server is down, lvs instructs the ipvsadm utility to remove the real server from the IPVS routing table. ipvsadm This service updates the IPVS routing table in the kernel. The lvs daemon sets up and administers LVS by calling ipvsadm to add, change, or delete entries in the IPVS routing table. nanny The nanny monitoring daemon runs on the active LVS router. Through this daemon, the active LVS router determines the health of each real server and, optionally, monitors its workload. A separate process runs for each service defined on each real server. lvs.cf This is the LVS configuration file. The full path for the file is /etc/sysconfig/ha/lvs.cf . Directly or indirectly, all daemons get their configuration information from this file. Piranha Configuration Tool This is the Web-based tool for monitoring, configuring, and administering LVS. This is the default tool to maintain the /etc/sysconfig/ha/lvs.cf LVS configuration file. send_arp This program sends out ARP broadcasts when the floating IP address changes from one node to another during failover. Quorum Disk qdisk A disk-based quorum daemon for CMAN / Linux-Cluster. mkqdisk Cluster Quorum Disk Utility qdiskd Cluster Quorum Disk Daemon | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/ch-comp-overview-CSO |
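To give a feel for how the rgmanager and quorum utilities listed above fit together in day-to-day administration, the following commands are a sketch only; webservice and node2.example.com are hypothetical names:

clustat                                        # display node membership and service status
clusvcadm -r webservice -m node2.example.com   # relocate a user service to another member
clusvcadm -d webservice                        # disable the service
clusvcadm -e webservice                        # enable it again
mkqdisk -L                                     # list existing quorum disk labels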
Chapter 5. ConsoleNotification [console.openshift.io/v1] | Chapter 5. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleNotificationSpec is the desired console notification configuration. 5.1.1. .spec Description ConsoleNotificationSpec is the desired console notification configuration. Type object Required text Property Type Description backgroundColor string backgroundColor is the color of the background for the notification as CSS data type color. color string color is the color of the text for the notification as CSS data type color. link object link is an object that holds notification link details. location string location is the location of the notification in the console. Valid values are: "BannerTop", "BannerBottom", "BannerTopBottom". text string text is the visible text of the notification. 5.1.2. .spec.link Description link is an object that holds notification link details. Type object Required href text Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 5.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolenotifications DELETE : delete collection of ConsoleNotification GET : list objects of kind ConsoleNotification POST : create a ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name} DELETE : delete a ConsoleNotification GET : read the specified ConsoleNotification PATCH : partially update the specified ConsoleNotification PUT : replace the specified ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name}/status GET : read status of the specified ConsoleNotification PATCH : partially update status of the specified ConsoleNotification PUT : replace status of the specified ConsoleNotification 5.2.1. /apis/console.openshift.io/v1/consolenotifications HTTP method DELETE Description delete collection of ConsoleNotification Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleNotification Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotificationList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleNotification Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 202 - Accepted ConsoleNotification schema 401 - Unauthorized Empty 5.2.2. /apis/console.openshift.io/v1/consolenotifications/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method DELETE Description delete a ConsoleNotification Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleNotification Table 5.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleNotification Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleNotification Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty 5.2.3. /apis/console.openshift.io/v1/consolenotifications/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method GET Description read status of the specified ConsoleNotification Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleNotification Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleNotification Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/console_apis/consolenotification-console-openshift-io-v1 |
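Rather than calling the endpoints above directly, the resource is usually managed with the oc client. The following sketch creates a banner and then removes it; the name, text, colors, and URL are illustrative values only:

oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: example-banner
spec:
  text: Scheduled maintenance this weekend
  location: BannerTop
  color: '#ffffff'
  backgroundColor: '#0088ce'
  link:
    href: 'https://status.example.com'
    text: details
EOF
oc get consolenotifications
oc delete consolenotification example-banner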
Part IV. Designing a decision service using guided decision tables | Part IV. Designing a decision service using guided decision tables As a business analyst or business rules developer, you can use guided decision tables to define business rules in a wizard-led tabular format. These rules are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Process Automation Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Process Automation Manager) Prerequisites The space and project for the guided decision tables have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-guided-decision-tables |
1.4. DM-Multipath Setup Overview | 1.4. DM-Multipath Setup Overview DM-Multipath includes compiled-in default settings that are suitable for common multipath configurations. Setting up DM-Multipath is often a simple procedure. The basic procedure for configuring your system with DM-Multipath is as follows: Install the device-mapper-multipath rpm. Edit the multipath.conf configuration file: comment out the default blacklist, change any of the existing defaults as needed, and save the configuration file. Start the multipath daemons. Create the multipath device with the multipath command. Detailed setup instructions for several example multipath configurations are provided in Chapter 3, Setting Up DM-Multipath . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/setup_overview
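Expressed as shell commands, the basic procedure above looks roughly like the following. Treat it as a sketch: the package installation step depends on your update tool, and the edits to /etc/multipath.conf depend on your storage hardware.

up2date device-mapper-multipath    # or install the device-mapper-multipath rpm by hand
vi /etc/multipath.conf             # comment out the default blacklist, adjust defaults as needed
modprobe dm-multipath
service multipathd start           # start the multipath daemon
chkconfig multipathd on            # make it persistent across reboots
multipath -v2                      # create the multipath devices
multipath -ll                      # verify the resulting multipath maps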
Chapter 11. Configuring seccomp profiles | Chapter 11. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Seccomp, secure computing mode, is a Linux kernel feature that can be used to limit the process running in a container to only call a subset of the available system calls. These system calls can be configured by creating a profile that is applied to a container or pod. Seccomp profiles are stored as JSON files on the disk. Important OpenShift workloads run unconfined by default, without any seccomp profile applied. Important Seccomp profiles cannot be applied to privileged containers. 11.1. Enabling the default seccomp profile for all pods OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . You can enable the default seccomp profile for a pod or container workload by creating a custom Security Context Constraint (SCC). Note There is a requirement to create a custom SCC. Do not edit the default SCCs. Editing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. For more information, see the section entitled "Default security context constraints". Follow these steps to enable the default seccomp profile for all pods: Export the available restricted SCC to a yaml file: USD oc get scc restricted -o yaml > restricted-seccomp.yaml Edit the created restricted SCC yaml file: USD vi restricted-seccomp.yaml Update as shown in this example: kind: SecurityContextConstraints metadata: name: restricted 1 <..snip..> seccompProfiles: 2 - runtime/default 3 1 Change to restricted-seccomp 2 Add seccompProfiles: 3 Add - runtime/default Create the custom SCC: USD oc create -f restricted-seccomp.yaml Expected output securitycontextconstraints.security.openshift.io/restricted-seccomp created Add the custom SCC to the ServiceAccount: USD oc adm policy add-scc-to-user restricted-seccomp -z default Note The default service account is the ServiceAccount that is applied unless the user configures a different one. OpenShift Container Platform configures the seccomp profile of the pod based on the information in the SCC. Expected output clusterrole.rbac.authorization.k8s.io/system:openshift:scc:restricted-seccomp added: "default" In OpenShift Container Platform 4.10 the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 11.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. 11.2.1. Setting up the custom seccomp profile Prerequisite You have cluster administrator permissions. You have created a custom security context constraints (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. 
Update the custom SCC by providing reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile. 11.2.2. Applying the custom seccomp profile to the workload Prerequisite The cluster administrator has set up the custom seccomp profile. For more details, see "Setting up the custom seccomp profile". Procedure Apply the seccomp profile to the workload by setting the securityContext.seccompProfile.type field as following: Example spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1 1 Provide the name of your custom seccomp profile. Alternatively, you can use the pod annotations seccomp.security.alpha.kubernetes.io/pod: localhost/<custom-name>.json . However, this method is deprecated in OpenShift Container Platform 4.10. During deployment, the admission controller validates the following: The annotations against the current SCCs allowed by the user role. The SCC, which includes the seccomp profile, is allowed for the pod. If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile. Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 11.3. Additional resources Managing security context constraints Post-installation machine configuration tasks | [
"oc get scc restricted -o yaml > restricted-seccomp.yaml",
"vi restricted-seccomp.yaml",
"kind: SecurityContextConstraints metadata: name: restricted 1 <..snip..> seccompProfiles: 2 - runtime/default 3",
"oc create -f restricted-seccomp.yaml",
"securitycontextconstraints.security.openshift.io/restricted-seccomp created",
"oc adm policy add-scc-to-user restricted-seccomp -z default",
"clusterrole.rbac.authorization.k8s.io/system:openshift:scc:restricted-seccomp added: \"default\"",
"seccompProfiles: - localhost/<custom-name>.json 1",
"spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/seccomp-profiles |
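If you need a starting point for the profile referenced by localhostProfile, a minimal custom seccomp profile might look like the sketch below. The file name and the short syscall allow list are assumptions for illustration; real workloads normally need a much longer list, and the file must end up under /var/lib/kubelet/seccomp/ on every worker node, for example through a MachineConfig as described above.

cat > custom-seccomp.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF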
Chapter 20. Support for FIPS cryptography | Chapter 20. Support for FIPS cryptography You can install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture. For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. These configuration methods ensure that your cluster meets the requirements of a FIPS compliance audit: only FIPS Validated / Modules in Process cryptography packages are enabled before the initial system boot. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 20.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS Validated / Modules in Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL7 core crypto components . For example, when users SSH into OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 20.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.7 Attributes Limitations FIPS support in RHEL 7 operating systems. The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS Validated / Modules in Process cryptographic module and algorithms that are obtained from RHEL 7 and RHCOS binaries and images. Use of a FIPS-compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using the x86_64 architecture. 20.2. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS Validated / Modules in Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS Validated / Modules in Process modules for cryptography. 20.2.1. etcd To ensure that the secrets that are stored in etcd use FIPS Validated / Modules in Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 20.2.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption.
By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS Validated / Modules in Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 20.2.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS Validated / Modules in Process cryptography modules, use CRI-O to manage your runtimes. CRI-O supports FIPS mode, in that it configures the containers to know that they are running in FIPS mode. 20.3. Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Amazon Web Services Microsoft Azure Bare metal Google Cloud Platform Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Enabling FIPS Mode in the RHEL 7 documentation. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-fips |
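Two quick checks are often useful here: confirming that fips: true is present in install-config.yaml before deployment, and confirming that a deployed node is actually running in FIPS mode. A brief sketch, with file paths assumed:

grep '^fips' install-config.yaml        # expect the line: fips: true
# on a deployed RHCOS or RHEL node:
cat /proc/sys/crypto/fips_enabled       # prints 1 when the kernel is in FIPS mode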
20.7. Managing a Virtual Machine Configuration | 20.7. Managing a Virtual Machine Configuration This section provides information about managing a virtual machine configuration. 20.7.1. Saving a Guest Virtual Machine's Configuration The virsh save [--bypass-cache] domain file [--xml string ] [--running] [--paused] [--verbose] command stops the specified domain, saving the current state of the guest virtual machine's system memory to a specified file. This may take a considerable amount of time, depending on the amount of memory in use by the guest virtual machine. You can restore the state of the guest virtual machine with the virsh restore ( Section 20.6.4, "Restoring a Guest Virtual Machine" ) command. The difference between the virsh save command and the virsh suspend command is that virsh suspend stops the domain CPUs, but leaves the domain's qemu process running and its memory image resident in the host system. This memory image will be lost if the host system is rebooted. The virsh save command stores the state of the domain on the hard disk of the host system and terminates the qemu process. This enables restarting the domain from the saved state. You can monitor the process of virsh save with the virsh domjobinfo command and cancel it with the virsh domjobabort command. The virsh save command can take the following arguments: --bypass-cache - causes the save to avoid the file system cache; note that using this flag may slow down the save operation. --xml - this argument must be used with an XML file name. Although this argument is usually omitted, it can be used to supply an alternative XML file for use on a restored guest virtual machine with changes only in the host-specific portions of the domain XML. For example, it can be used to account for the file naming differences in underlying storage due to disk snapshots taken after the guest was saved. --running - overrides the state recorded in the save image to start the guest virtual machine as running. --paused - overrides the state recorded in the save image to start the guest virtual machine as paused. --verbose - displays the progress of the save. Example 20.8. How to save a guest virtual machine's running configuration The following example saves the guest1 virtual machine's running configuration to the guest1-config.xml file: # virsh save guest1 guest1-config.xml --running 20.7.2. Defining a Guest Virtual Machine with an XML File The virsh define filename command defines a guest virtual machine from an XML file. The guest virtual machine definition in this case is registered but not started. If the guest virtual machine is already running, the changes will take effect once the domain is shut down and started again. Example 20.9. How to create a guest virtual machine from an XML file The following example creates a virtual machine from the pre-existing guest1-config.xml XML file, which contains the configuration for the virtual machine: 20.7.3. Updating the XML File That will be Used for Restoring a Guest Virtual Machine Note This command should only be used to recover from a situation where the guest virtual machine does not run properly. It is not meant for general use. The virsh save-image-define filename [--xml /path/to/file ] [--running] [--paused] command updates the guest virtual machine's XML file that will be used when the virtual machine is restored with the virsh restore command.
The --xml argument must be an XML file name containing the alternative XML elements for the guest virtual machine's XML. For example, it can be used to account for the file naming differences resulting from creating disk snapshots of underlying storage after the guest was saved. The save image records if the guest virtual machine should be restored to a running or paused state. Using the arguments --running or --paused dictates the state that is to be used. Example 20.10. How to save the guest virtual machine's running configuration The following example updates the guest1-config.xml configuration file with the state of the corresponding running guest: # virsh save-image-define guest1-config.xml --running 20.7.4. Extracting the Guest Virtual Machine XML File Note This command should only be used to recover from a situation where the guest virtual machine does not run properly. It is not meant for general use. The virsh save-image-dumpxml file --security-info command will extract the guest virtual machine XML file that was in effect at the time the saved state file (used in the virsh save command) was referenced. Using the --security-info argument includes security sensitive information in the file. Example 20.11. How to pull the XML configuration from the last save The following example triggers a dump of the configuration file that was created the last time the guest virtual machine was saved . In this example, the resulting dump file is named guest1-config-xml : # virsh save-image-dumpxml guest1-config.xml 20.7.5. Editing the Guest Virtual Machine Configuration Note This command should only be used to recover from a situation where the guest virtual machine does not run properly. It is not meant for general use. The virsh save-image-edit <file> [--running] [--paused] command edits the XML configuration file that was created by the virsh save command. See Section 20.7.1, "Saving a Guest Virtual Machine's Configuration" for information on the virsh save command. When the guest virtual machine is saved, the resulting image file will indicate if the virtual machine should be restored to a --running or --paused state. Without using these arguments in the save-image-edit command, the state is determined by the image file itself. By selecting --running (to select the running state) or --paused (to select the paused state) you can overwrite the state that virsh restore should use. Example 20.12. How to edit a guest virtual machine's configuration and restore the machine to running state The following example opens a guest virtual machine's configuration file, named guest1-config.xml , for editing in your default editor. When the edits are saved, the virtual machine boots with the new settings: # virsh save-image-edit guest1-config.xml --running | [
"virsh define guest1-config.xml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-save-config |
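Taken together, the commands in this section support a simple save, inspect, edit, and restore cycle. The sketch below reuses the guest1 domain from the examples; guest1.save is a hypothetical file name for the save image:

virsh save guest1 guest1.save --running             # save memory state and stop the domain
virsh save-image-dumpxml guest1.save > guest1.xml   # inspect the XML embedded in the save image
virsh save-image-edit guest1.save --running         # open the embedded XML in an editor if changes are needed
virsh restore guest1.save                           # start the domain again from the saved state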
3.10. Repairing a GFS2 File System | 3.10. Repairing a GFS2 File System When nodes fail with the file system mounted, file system journaling allows fast recovery. However, if a storage device loses power or is physically disconnected, file system corruption may occur. (Journaling cannot be used to recover from storage subsystem failures.) When that type of corruption occurs, you can recover the GFS2 file system by using the fsck.gfs2 command. Important The fsck.gfs2 command must be run only on a file system that is unmounted from all nodes. When the file system is being managed as a Pacemaker cluster resource, you can disable the file system resource, which unmounts the file system. After running the fsck.gfs2 command, you enable the file system resource again. The timeout value specified with the --wait option of the pcs resource disable indicates a value in seconds. To ensure that fsck.gfs2 command does not run on a GFS2 file system at boot time, you can set the run_fsck parameter of the options argument when creating the GFS2 file system resource in a cluster. Specifying "run_fsck=no" will indicate that you should not run the fsck command. Note If you have experience using the gfs_fsck command on GFS file systems, note that the fsck.gfs2 command differs from some earlier releases of gfs_fsck in the following ways: Pressing Ctrl + C while running the fsck.gfs2 command interrupts processing and displays a prompt asking whether you would like to abort the command, skip the rest of the current pass, or continue processing. You can increase the level of verbosity by using the -v flag. Adding a second -v flag increases the level again. You can decrease the level of verbosity by using the -q flag. Adding a second -q flag decreases the level again. The -n option opens a file system as read only and answers no to any queries automatically. The option provides a way of trying the command to reveal errors without actually allowing the fsck.gfs2 command to take effect. Refer to the fsck.gfs2 man page for additional information about other command options. Running the fsck.gfs2 command requires system memory above and beyond the memory used for the operating system and kernel. Each block of memory in the GFS2 file system itself requires approximately five bits of additional memory, or 5/8 of a byte. So to estimate how many bytes of memory you will need to run the fsck.gfs2 command on your file system, determine how many blocks the file system contains and multiply that number by 5/8. For example, to determine approximately how much memory is required to run the fsck.gfs2 command on a GFS2 file system that is 16TB with a block size of 4K, first determine how many blocks of memory the file system contains by dividing 16TB by 4K: Since this file system contains 4294967296 blocks, multiply that number by 5/8 to determine how many bytes of memory are required: This file system requires approximately 2.6GB of free memory to run the fsck.gfs2 command. Note that if the block size was 1K, running the fsck.gfs2 command would require four times the memory, or approximately 11GB. Usage -y The -y flag causes all questions to be answered with yes . With the -y flag specified, the fsck.gfs2 command does not prompt you for an answer before making changes. BlockDevice Specifies the block device where the GFS2 file system resides. Example In this example, the GFS2 file system residing on block device /dev/testvg/testlv is repaired. All queries to repair are automatically answered with yes . | [
"pcs resource disable --wait= timeoutvalue resource_id pcs resource enable resource_id",
"17592186044416 / 4096 = 4294967296",
"4294967296 * 5/8 = 2684354560",
"fsck.gfs2 -y BlockDevice",
"fsck.gfs2 -y /dev/testvg/testlv Initializing fsck Validating Resource Group index. Level 1 RG check. (level 1 passed) Clearing journals (this may take a while) Journals cleared. Starting pass1 Pass1 complete Starting pass1b Pass1b complete Starting pass1c Pass1c complete Starting pass2 Pass2 complete Starting pass3 Pass3 complete Starting pass4 Pass4 complete Starting pass5 Pass5 complete Writing changes to disk fsck.gfs2 complete"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-repairfs |
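When the file system is managed as a Pacemaker cluster resource, the repair workflow described above can be expressed as the following sketch; clusterfs is a hypothetical resource name and the device path matches the earlier example:

pcs resource disable --wait=100 clusterfs   # unmount the GFS2 file system on all nodes
fsck.gfs2 -y /dev/testvg/testlv             # repair, answering yes to all queries
pcs resource enable clusterfs               # mount the file system again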
Chapter 29. Networking | Chapter 29. Networking arptables component, BZ#1018135 Red Hat Enterprise Linux 7 introduces the arptables packages, which replace the arptables_jf packages included in Red Hat Enterprise Linux 6. All users of arptables are advised to update their scripts because the syntax of this version differs from arptables_jf . rsync component, BZ# 1082496 The rsync utility cannot be run as a socket-activated service because the rsyncd@.service file is missing from the rsync package. Consequently, the systemctl start rsyncd.socket command does not work. However, running rsync as a daemon by executing the systemctl start rsyncd.service command works as expected. openssl component, BZ#1062656 It is not possible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5-signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. bind component, BZ#1004300 Previously, named-chroot.service set up the chroot environment for the named daemon by mounting the necessary files and directories to the /var/named/chroot/ path before starting the daemon. However, if the startup of the daemon failed, the mounts remained mounted. As a consequence, the chroot environment was corrupted. This also affected named-sdb-chroot.service , which used the same chroot path. With this update, named-chroot.service and named-sdb-chroot.service have been modified and the chroot setup code has been separated into two new systemd services, named-chroot-setup.service and named-sdb-chroot-setup.service . In addition, the named-sdb daemon now uses its own chroot path, /var/named/chroot_sdb/ . Also, the named-sdb daemon has been removed from the bind-chroot package and is now included in its own bind-sdb-chroot subpackage. Users who use named-sdb in the chroot environment are advised to install the bind-sdb-chroot package. bind-dyndb-ldap component, BZ# 1078295 The bind-dyndb-ldap plug-in does not fully support the DNS64 server. As a consequence, the BIND daemon configured with DNS64 terminates unexpectedly when a DNS64 query is processed by bind-dyndb-ldap . To work around this problem, disable DNS64 in the named.conf file. The whole section concerning DNS64 can be commented out. openvswitch component, BZ#1066493 In certain cases, when connecting two network interface controllers (NIC) that use the ixgbe driver, the TCP stream throughput does not exceed 8.4 GB. This problem manifests itself both on a NIC to NIC level, although to a very limited degree, as well as in combination with virtual machines running on top of an openvswitch bridge. vsftpd component, BZ#1058712 The vsftpd daemon does not currently support cipher suites based on the ECDHE key-exchange protocol. Consequently, when vsftpd is configured to use such suites, the connection is refused with a no shared cipher SSL alert. fcoe-utils component, BZ# 1049200 The -m vn2vn option of the fcoeadm command does not work correctly, and Fabric mode is always used instead of "vn2vn". As a consequence, a vn2vn instance cannot be created using fcoeadm , and the port state is offline instead of online.
To work around this problem, modify the sysfs file manually to create a vn2vn link. NetworkManager component, BZ#1030947 The brctl addbr name command, which is used for creating a new instance of an Ethernet bridge, also brings the interface up. Consequently, the brctl delbr name command does not delete the instance of an Ethernet bridge because the network interface corresponding to the bridge is not down. To work around the problem: Either bring the instance down by using the ip link set dev name down command before running the brctl delbr name command; Or use the ip link del name command for deleting the instance. | [
"Environment=OPENSSL_ENABLE_MD5_VERIFY=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/known-issues-networking |
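As a concrete sketch of the openssl workaround above, the unit file override amounts to the following commands, run as root; restart the service only if it is already running:

cp /usr/lib/systemd/system/wpa_supplicant.service /etc/systemd/system/
# edit /etc/systemd/system/wpa_supplicant.service and add to the [Service] section:
#   Environment=OPENSSL_ENABLE_MD5_VERIFY=1
systemctl daemon-reload
systemctl restart wpa_supplicant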
A.4. Reserved Keywords For Future Use | A.4. Reserved Keywords For Future Use ALLOCATE ARE ARRAY ASENSITIVE ASYMETRIC AUTHORIZATION BINARY CALLED CASCADED CHARACTER CHECK CLOSE COLLATE COMMIT CONNECT CORRESPONDING CRITERIA CURRENT_DATE CURRENT_TIME CURRENT_TIMESTAMP CURRENT_USER CURSOR CYCLE DATALINK DEALLOCATE DEC DEREF DESCRIBE DETERMINISTIC DISCONNECT DLNEWCOPY DLPREVIOUSCOPY DLURLCOMPLETE DLURLCOMPLETEONLY DLURLCOMPLETEWRITE DLURLPATH DLURLPATHONLY DLURLPATHWRITE DLURLSCHEME DLURLSERVER DLVALUE DYNAMIC ELEMENT EXTERNAL FREE GET GLOBAL GRANT HAS HOLD IDENTITY IMPORT INDICATOR INPUT INSENSITIVE INT INTERVAL ISOLATION LARGE LOCALTIME LOCALTIMESTAMP MATCH MEMBER METHOD MODIFIES MODULE MULTISET NATIONAL NATURAL NCHAR NCLOB NEW NONE NUMERIC OLD OPEN OUTPUT OVERLAPS PRECISION PREPARE RANGE READS RECURSIVE REFERENCING RELEASE REVOKE ROLLBACK ROLLUP SAVEPOINT SCROLL SEARCH SENSITIVE SESSION_USER SPECIFIC SPECIFICTYPE SQL START STATIC SUBMULTILIST SYMETRIC SYSTEM SYSTEM_USER TIMEZONE_HOUR TIMEZONE_MINUTE TRANSLATION TREAT VALUE VARYING WHENEVER WINDOW WITHIN XMLBINARY XMLCAST XMLDOCUMENT XMLEXISTS XMLITERATE XMLTEXT XMLVALIDATE | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/reserved_keywords_for_future_use |
Chapter 34. ExternalLogging schema reference | Chapter 34. ExternalLogging schema reference Used in: CruiseControlSpec , EntityTopicOperatorSpec , EntityUserOperatorSpec , KafkaBridgeSpec , KafkaClusterSpec , KafkaConnectSpec , KafkaMirrorMaker2Spec , KafkaMirrorMakerSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the ExternalLogging type from InlineLogging . It must have the value external for the type ExternalLogging . Property Property type Description type string Must be external . valueFrom ExternalConfigurationReference ConfigMap entry where the logging configuration is stored. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ExternalLogging-reference |
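In practice, the two properties point the component at a ConfigMap that holds the logging configuration. The following is a sketch only; the ConfigMap name, key, and log4j contents are assumptions, and the logging block would sit inside the spec of a resource such as KafkaConnect:

oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: connect-logging
data:
  log4j.properties: |
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
EOF
# referenced from the component spec:
#   logging:
#     type: external
#     valueFrom:
#       configMapKeyRef:
#         name: connect-logging
#         key: log4j.properties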
Chapter 12. Creating a Ceph key for external access | Chapter 12. Creating a Ceph key for external access External access to Ceph storage is access to Ceph from any site that is not local. Ceph storage at the central location is external for edge (DCN) sites, just as Ceph storage at the edge is external for the central location. When you deploy the central or DCN sites with Ceph storage, you have the option of using the default openstack keyring for both local and external access. Alternatively, you can create a separate key for access by non-local sites. If you decide to use additional Ceph keys for access to your external sites, each key must have the same name. The key name is external in the examples that follow. If you use a separate key for access by non-local sites, you have the additional security benefit of being able to revoke and re-issue the external key in response to a security event without interrupting local access. However, using a separate key for external access will result in the loss of access to some features, such as cross availability zone backups and offline volume migration. You must balance the needs of your security posture against the desired feature set. By default, the keys for the central and all DCN sites will be shared. 12.1. Creating a Ceph key for external access Complete the following steps to create an external key for non-local access. Procedure Create a Ceph key for external access. This key is sensitive. You can generate the key using the following command: In the directory of the stack you are deploying, create a ceph_keys.yaml environment file with contents like the following, using the output from the command for the key: Include the ceph_keys.yaml environment file in the deployment of the site. For example, to deploy the central site with the ceph_keys.yaml environment file, run a command like the following: 12.2. Using external Ceph keys You can only use keys that have already been deployed. For information on deploying a site with an external key, see Section 12.1, "Creating a Ceph key for external access" . This should be done for both central and edge sites. When you deploy an edge site that will use an external key provided by central, complete the following: Create a dcn_ceph_external.yaml environment file for the edge site. You must include the cephx-key-client-name option to specify the deployed key to include. Include the dcn_ceph_external.yaml file so that the edge site can access the Ceph cluster at the central site. Include the ceph_keys.yaml file to deploy an external key for the Ceph cluster at the edge site. When you update the central location after deploying your edge sites, ensure that the central location uses the DCN external keys: Ensure that the CephClientUserName parameter matches the key being exported. If you are using the name external as shown in these examples, create glance_update.yaml to be similar to the following: Use the openstack overcloud export ceph command to include the external keys for DCN edge access from the central location. To do this you must provide a comma-delimited list of stacks for the --stack argument, and include the cephx-key-client-name option: Redeploy the central site using the original templates and include the newly created dcn_ceph_external.yaml and glance_update.yaml files. | [
"python3 -c 'import os,struct,time,base64; key = os.urandom(16) ; header = struct.pack(\"<hiih\", 1, int(time.time()), 0, len(key)) ; print(base64.b64encode(header + key).decode())'",
"parameter_defaults: CephExtraKeys: - name: \"client.external\" caps: mgr: \"allow *\" mon: \"profile rbd\" osd: \"profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images\" key: \"AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==\" mode: \"0600\"",
"overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ .... -e ~/central/ceph_keys.yaml",
"sudo -E openstack overcloud export ceph --stack central --config-download-dir /var/lib/mistral --cephx-key-client-name external --output-file ~/dcn-common/dcn_ceph_external.yaml",
"parameter_defaults: GlanceEnabledImportMethods: web-download,copy-image GlanceBackend: rbd GlanceStoreDescription: 'central rbd glance store' CephClusterName: central GlanceBackendID: central GlanceMultistoreConfig: dcn0: GlanceBackend: rbd GlanceStoreDescription: 'dcn0 rbd glance store' CephClientUserName: 'external' CephClusterName: dcn0 GlanceBackendID: dcn0 dcn1: GlanceBackend: rbd GlanceStoreDescription: 'dcn1 rbd glance store' CephClientUserName: 'external' CephClusterName: dcn1 GlanceBackendID: dcn1",
"sudo -E openstack overcloud export ceph --stack dcn0,dcn1,dcn2 --config-download-dir /var/lib/mistral --cephx-key-client-name external --output-file ~/central/dcn_ceph_external.yaml",
"openstack overcloud deploy --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/central/central_roles.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e ~/central/central-images-env.yaml -e ~/central/role-counts.yaml -e ~/central/site-name.yaml -e ~/central/ceph.yaml -e ~/central/ceph_keys.yaml -e ~/central/glance.yaml -e ~/central/dcn_ceph_external.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/distributed_compute_node_and_storage_deployment/external-option |
Chapter 7. Containerized services | Chapter 7. Containerized services Director installs the core OpenStack Platform services as containers on the overcloud. This section provides some background information on how containerized services work. 7.1. Containerized service architecture Director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/deployment/ directory. You must enable the OS::TripleO::Services::Podman service in the role for all nodes that use containerized services. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Podman service along with the base composable services. For example, the IronicConductor role uses the following role definition: 7.2. Containerized service parameters Each containerized service template contains an outputs section that defines a data set passed to the OpenStack Orchestration (heat) service. In addition to the standard composable service parameters (see Section 6.5, "Examining role parameters" ), the template contains a set of parameters specific to the container configuration. puppet_config Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters: config_volume - The mounted volume that stores the configuration. puppet_tags - Tags to pass to Puppet during configuration. OpenStack uses these tags to restrict the Puppet run to the configuration resource of a particular service. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag to ensure that only the keystone_config Puppet resources run on the configuration container. step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service. config_image - The container image used to configure the service. kolla_config A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service. docker_config Tasks to run on the configuration container for the service. All tasks are grouped into the following steps to help director perform a staged deployment: Step 1 - Load balancer configuration Step 2 - Core services (Database, Redis) Step 3 - Initial configuration of OpenStack Platform services Step 4 - General OpenStack Platform services configuration Step 5 - Service activation host_prep_tasks Preparation tasks for the bare metal node to accommodate the containerized service. 7.3. Preparing container images The overcloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize this environment file that you can use to prepare your container images. Note If you need to configure specific container image versions for your overcloud, you must pin the images to a specific version. For more information, see Pinning container images for the overcloud . Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images.
This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. 7.4. Container image preparation parameters The default file for preparing your containers ( containers-prepare-parameter.yaml ) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images: Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters that you can use with each ContainerImagePrepare strategy: Parameter Description excludes List of regular expressions to exclude image names from a strategy. includes List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. modify_append_tag String to append to the tag for the destination image. For example, if you pull an image with the tag 16.2.3-5.161 and set the modify_append_tag to -hotfix , the director tags the final image as 16.2.3-5.161-hotfix. modify_only_with_labels A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. modify_role String of ansible role names to run during upload but before pushing the image to the destination registry. modify_vars Dictionary of variables to pass to modify_role . push_destination Defines the namespace of the registry that you want to push images to during the upload process. If set to true , the push_destination is set to the undercloud registry namespace using the hostname, which is the recommended method. If set to false , the push to a local registry does not occur and nodes pull images directly from the source. If set to a custom value, director pushes images to an external local registry. If you set this parameter to false in production environments while pulling images directly from Red Hat Container Catalog, all overcloud nodes will simultaneously pull the images from the Red Hat Container Catalog over your external connection, which can cause bandwidth issues. Only use false to pull directly from a Red Hat Satellite Server hosting the container images. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and include the credentials with the ContainerImageRegistryCredentials parameter. pull_source The source registry from where to pull the original container images. set A dictionary of key: value definitions that define where to obtain the initial images. tag_from_label Use the value of specified container image metadata labels to create a tag for every image and pull that tagged image. 
For example, if you set tag_from_label: {version}-{release} , director uses the version and release labels to construct a new tag. For one container, version might be set to 16.2.3 and release might be set to 5.161 , which results in the tag 16.2.3-5.161. Director uses this parameter only if you have not defined tag in the set dictionary. Important When you push images to the undercloud, use push_destination: true instead of push_destination: UNDERCLOUD_IP:PORT . The push_destination: true method provides a level of consistency across both IPv4 and IPv6 addresses. The set parameter accepts a set of key: value definitions: Key Description ceph_image The name of the Ceph Storage container image. ceph_namespace The namespace of the Ceph Storage container image. ceph_tag The tag of the Ceph Storage container image. ceph_alertmanager_image ceph_alertmanager_namespace ceph_alertmanager_tag The name, namespace, and tag of the Ceph Storage Alert Manager container image. ceph_grafana_image ceph_grafana_namespace ceph_grafana_tag The name, namespace, and tag of the Ceph Storage Grafana container image. ceph_node_exporter_image ceph_node_exporter_namespace ceph_node_exporter_tag The name, namespace, and tag of the Ceph Storage Node Exporter container image. ceph_prometheus_image ceph_prometheus_namespace ceph_prometheus_tag The name, namespace, and tag of the Ceph Storage Prometheus container image. name_prefix A prefix for each OpenStack service image. name_suffix A suffix for each OpenStack service image. namespace The namespace for each OpenStack service image. neutron_driver The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. tag Sets a specific tag for all images from the source. If not defined, director uses the Red Hat OpenStack Platform version number as the default value. This parameter takes precedence over the tag_from_label value. Note The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means that there is no longer a latest tag. 7.5. Guidelines for container image tagging The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release . version Corresponds to a major and minor version of Red Hat OpenStack Platform. These versions act as streams that contain one or more releases. release Corresponds to a release of a specific container image version within a version stream. For example, if the latest version of Red Hat OpenStack Platform is 16.2.3 and the release for the container image is 5.161 , then the resulting tag for the container image is 16.2.3-5.161. The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 16.2 and 16.2.3 link to the latest release in the 16.2.3 container stream. If a new minor release of 16.2 occurs, the 16.2 tag links to the latest release for the new minor release stream while the 16.2.3 tag continues to link to the latest release within the 16.2.3 stream. The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. 
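A quick way to see the values that tag_from_label reads is to inspect the image metadata directly. The following check is illustrative only and is not part of the documented procedure; the image reference is an example and the use of skopeo and jq is an assumption, so substitute tooling available in your environment:
# Log in first if the registry requires authentication
$ podman login registry.redhat.io
# Print the version and release labels that {version}-{release} expands to
$ skopeo inspect docker://registry.redhat.io/rhosp-rhel8/openstack-nova-api:16.2 | jq '.Labels | {version, release}'
The two values returned here are the same labels that director combines into the final image tag.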
Use the following guidelines to determine whether to use tag or tag_from_label . The default value for tag is the major version for your OpenStack Platform version. For this version it is 16.2. This always corresponds to the latest minor version and release. To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 16.2.2, set tag to 16.2.2. When you set tag , director always downloads the latest container image release for the version set in tag during installation and updates. If you do not set tag , director uses the value of tag_from_label in conjunction with the latest major version. The tag_from_label parameter generates the tag from the label metadata of the latest container image release it inspects from the Red Hat Container Registry. For example, the labels for a certain container might use the following version and release metadata: The default value for tag_from_label is {version}-{release} , which corresponds to the version and release metadata labels for each container image. For example, if a container image has 16.2.3 set for version and 5.161 set for release , the resulting tag for the container image is 16.2.3-5.161. The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label , omit the tag parameter from your container preparation configuration. A key difference between tag and tag_from_label is that director uses tag to pull an image only based on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image. 7.6. Obtaining container images from private registries The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file. ContainerImageRegistryCredentials Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries. In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials : Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. For more information, see Red Hat Container Registry Authentication . ContainerImageRegistryLogin The ContainerImageRegistryLogin parameter is used to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. 
This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images. You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy. However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true , the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials , set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud. 7.7. Layering image preparation entries The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries. The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 16.2.1-hotfix : The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match. A similar technique is used if your Block Storage (cinder) driver requires a vendor supplied cinder-volume image known as a plugin. If your Block Storage driver requires a plugin, see Deploying a vendor plugin in the Advanced Overcloud Customization guide. 7.8. Modifying images during preparation It is possible to modify images during image preparation, and then immediately deploy the overcloud with modified images. Note Red Hat OpenStack Platform (RHOSP) director supports modifying images during preparation for RHOSP containers, not for Ceph containers. Scenarios for modifying images include: As part of a continuous integration pipeline where images are modified with the changes being tested before deployment. As part of a development workflow where local changes must be deployed for testing and development. When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes. To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image. The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter: modify_role specifies the Ansible role to invoke for each image to modify. modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image. modify_vars is a dictionary of Ansible variables to pass to the role. To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role. 
While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect: Important To use the openstack tripleo container image prepare command, your undercloud must contain a running image-serve registry. As a result, you cannot run this command before a new undercloud installation because the image-serve registry is not yet installed. You can run this command after a successful undercloud installation. 7.9. Updating existing packages on container images Note Red Hat OpenStack Platform (RHOSP) director supports updating existing packages on container images for RHOSP containers, not for Ceph containers. Procedure The following example ContainerImagePrepare entry updates all packages on the container images by using the dnf repository configuration of the undercloud host: 7.10. Installing additional RPM files to container images You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository. Note Red Hat OpenStack Platform (RHOSP) director supports installing additional RPM files to container images for RHOSP containers, not for Ceph containers. Note When you modify container images in existing deployments, you must then perform a minor update to apply the changes to your overcloud. For more information, see Keeping Red Hat OpenStack Platform Updated . Procedure The following example ContainerImagePrepare entry installs some hotfix packages on only the nova-compute image: 7.11. Modifying container images with a custom Dockerfile You can specify a directory that contains a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. Note Red Hat OpenStack Platform (RHOSP) director supports modifying container images with a custom Dockerfile for RHOSP containers, not for Ceph containers. Procedure The following example runs the custom Dockerfile on the nova-compute image: The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user: 7.12. Deploying a vendor plugin To use some third-party hardware as a Block Storage back end, you must deploy a vendor plugin. The following example demonstrates how to deploy a vendor plugin to use Dell EMC hardware as a Block Storage back end. For more information about supported back end appliances and drivers, see Third-Party Storage Providers in the Storage Guide . Procedure Create a new container images file for your overcloud: Edit the containers-prepare-parameter-dellemc.yaml file. Add an excludes parameter to the strategy for the main Red Hat OpenStack Platform container images. Use this parameter to exclude the container image that the vendor container image will replace. In the example, the container image is the cinder-volume image: Add a new strategy to the ContainerImagePrepare parameter that includes the replacement container image for the vendor plugin: Add the authentication details for the registry.connect.redhat.com registry to the ContainerImageRegistryCredentials parameter: Save the containers-prepare-parameter-dellemc.yaml file.
Include the containers-prepare-parameter-dellemc.yaml file with any deployment commands, such as openstack overcloud deploy : When director deploys the overcloud, the overcloud uses the vendor container image instead of the standard container image. IMPORTANT The containers-prepare-parameter-dellemc.yaml file replaces the standard containers-prepare-parameter.yaml file in your overcloud deployment. Do not include the standard containers-prepare-parameter.yaml file in your overcloud deployment. Retain the standard containers-prepare-parameter.yaml file for your undercloud installation and updates. | [
"- name: IronicConductor description: | Ironic Conductor node role networks: InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-ironic-%index%' ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rhsm - OS::TripleO::Services::SensuClient - OS::TripleO::Services::Snmp - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned",
"sudo openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three)",
"parameter_defaults: ContainerImagePrepare: - set: tag: 16.2",
"parameter_defaults: ContainerImagePrepare: - set: tag: 16.2.2",
"parameter_defaults: ContainerImagePrepare: - set: # tag: 16.2 tag_from_label: '{version}-{release}'",
"\"Labels\": { \"release\": \"5.161\", \"version\": \"16.2.3\", }",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ - push_destination: true set: namespace: registry.internalsite.com/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' '192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: true",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: false",
"parameter_defaults: ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" push_destination: true excludes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 name_prefix: openstack- name_suffix: '' tag:16.2 - push_destination: true includes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 tag: 16.2.1-hotfix",
"sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml",
"ContainerImagePrepare: - push_destination: true modify_role: tripleo-modify-image modify_append_tag: \"-updated\" modify_vars: tasks_from: yum_update.yml compare_host_packages: true yum_repos_dir_path: /etc/yum.repos.d",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: rpm_install.yml rpms_path: /home/stack/nova-hotfix-pkgs",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: modify_image.yml modify_dir_path: /home/stack/nova-custom",
"FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest USER \"root\" COPY customize.sh /tmp/ RUN /tmp/customize.sh USER \"nova\"",
"sudo openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter-dellemc.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: true excludes: - cinder-volume set: namespace: registry.redhat.io/rhosp-rhel8 name_prefix: openstack- name_suffix: '' tag: 16.2 tag_from_label: \"{version}-{release}\"",
"parameter_defaults: ContainerImagePrepare: - push_destination: true includes: - cinder-volume set: namespace: registry.connect.redhat.com/dellemc name_prefix: openstack- name_suffix: -dellemc-rhosp16 tag: 16.2-2",
"parameter_defaults: ContainerImageRegistryCredentials: registry.redhat.io: [service account username]: [service account password] registry.connect.redhat.com: [service account username]: [service account password]",
"openstack overcloud deploy --templates -e containers-prepare-parameter-dellemc.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/advanced_overcloud_customization/assembly_containerized-services |
Chapter 6. Bug fixes | Chapter 6. Bug fixes This section describes bugs with significant user impact, which were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions. 6.1. The Cephadm utility .rgw.root pool is not automatically created during Ceph Object Gateway multisite check Previously, cephadm performed a check for Ceph Object Gateway multisite, to help with signalling regressions in the release, that caused the .rgw.root pool to be created, and re-created if deleted, resulting in users being stuck with the .rgw.root pool even when not using Ceph Object Gateway. With this fix, the check that created the pool is removed, and the pool is no longer created. Users who already have the pool in their system but do not want it can now delete it without the issue of its recreation. Users who do not have this pool will not have the pool automatically created for them. ( BZ#2090395 ) 6.2. Ceph File System Reintegrating stray entries does not fail when the destination directory is full Previously, Ceph Metadata Servers reintegrated an unlinked file if it had a reference to it, that is, if the deleted file had hard links or was a part of a snapshot. The reintegration, which is essentially an internal rename operation, failed if the destination directory was full. This resulted in the Ceph Metadata Server failing to reintegrate stray or deleted entries. With this release, full space checks during the reintegration of stray entries are ignored and these stray entries are reintegrated even when the destination directory is full. ( BZ#2041572 ) MDS daemons no longer crash when receiving metrics from new clients Previously, in certain scenarios, newer clients were being used for old CephFS clusters. While upgrading old CephFS, cephadm or mgr used newer clients to perform checks, tests or configurations with an old Ceph cluster. Due to this, the MDS daemons crashed when receiving unknown metrics from newer clients. With this fix, libceph clients send only the metrics that the MDS daemons support to the MDS by default. An additional option is also added to force-enable all metrics when users consider it safe. ( BZ#2081914 ) Ceph Metadata Server no longer crashes during concurrent lookup and unlink operations Previously, an incorrect assumption of an assert placed in the code, which gets hit on concurrent lookup and unlink operations from a Ceph client, caused the Ceph Metadata Server to crash. The latest fix moves the assertion to the relevant place, where the assumption is valid during concurrent lookup and unlink operations, so that the Ceph Metadata Server continues serving Ceph client operations without crashing. ( BZ#2093064 ) 6.3. Ceph Object Gateway Usage of MD5 for non-cryptographic purposes in a FIPS environment is allowed Previously, in a FIPS enabled environment, the usage of MD5 digest was not allowed by default, unless explicitly excluded for non-cryptographic purposes. Due to this, a segfault occurred during the S3 complete multipart upload operation. With this fix, the usage of MD5 for non-cryptographic purposes in a FIPS environment for S3 complete multipart PUT operations is explicitly allowed and the S3 multipart operations can be completed. ( BZ#2088601 ) 6.4.
RADOS ceph-objectstore-tool command allows manual trimming of the accumulated PG log dups entries Previously, trimming of PG log dups entries was prevented during the low-level PG split operation, which is used by the PG autoscaler with far higher frequency than by a human operator. Stalling the trimming of dups resulted in significant memory growth of the PG log, leading to OSD crashes as it ran out of memory. Restarting an OSD did not solve the problem as the PG log is stored on disk and reloaded to RAM on startup. With this fix, the ceph-objectstore-tool command allows manual trimming of the accumulated PG log dups entries, to unblock the automatic trimming machinery. A debug improvement is implemented that prints the number of dups entries to the OSD's log to help future investigations. ( BZ#2094069 ) 6.5. RBD Mirroring Snapshot-based mirroring process no longer gets cancelled Previously, as a result of an internal race condition, the rbd mirror snapshot schedule add command would be cancelled. The snapshot-based mirroring process for the affected image would not start if no other existing schedules were applicable. With this release, the race condition is fixed and the snapshot-based mirroring process starts as expected. ( BZ#2099799 ) Existing schedules take effect when an image is promoted to primary Previously, due to an ill-considered optimization, existing schedules would not take effect following an image's promotion to primary, resulting in the snapshot-based mirroring process not starting for a recently promoted image. With this release, the optimization causing this issue is removed, the existing schedules now take effect when an image is promoted to primary, and the snapshot-based mirroring process starts as expected. ( BZ#2100519 ) rbd-mirror daemon no longer acquires exclusive lock Previously, due to a logic error, the rbd-mirror daemon could acquire the exclusive lock on a de-facto primary image. Due to this, the snapshot-based mirroring process for the affected image would stop, reporting a "failed to unlink local peer from remote image" error. With this release, the logic error is fixed, the rbd-mirror daemon no longer acquires the exclusive lock on a de-facto primary image, and the snapshot-based mirroring process continues to work as expected. ( BZ#2100520 ) Mirror snapshot queue used by rbd-mirror is extended and is no longer removed Previously, as a result of an internal race condition, the mirror snapshot in use by the rbd-mirror daemon on the secondary cluster would be removed, causing the snapshot-based mirroring process for the affected image to stop, reporting a "split-brain" error. With this release, the mirror snapshot queue is extended in length and the mirror snapshot cleanup procedure is amended accordingly, so that mirror snapshots that are still in use by the rbd-mirror daemon on the secondary cluster are no longer removed automatically and the snapshot-based mirroring process does not stop. ( BZ#2092843 ) 6.6. The Ceph Ansible utility Adoption playbook can now install cephadm on OSD nodes Previously, due to the tools repository being disabled on OSD nodes, you could not install cephadm on OSD nodes, resulting in the failure of the adoption playbook. With this fix, the tools repository is enabled on OSD nodes and the adoption playbook can now install cephadm on OSD nodes.
( BZ#2073480 ) Removal of legacy directory ensures error-free cluster post adoption Previously, cephadm displayed an unexpected behaviour with its 'config inferring' function whenever a legacy directory, such as /var/lib/ceph/mon , was found. Due to this behaviour, post adoption, the cluster was left with the following error: "CEPHADM_REFRESH_FAILED: failed to probe daemons or devices". With this release, the adoption playbook ensures the removal of this directory and the cluster is not left in an error state after the adoption. ( BZ#2075510 ) | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/bug-fixes
Chapter 5. ROSA with HCP limits and scalability | Chapter 5. ROSA with HCP limits and scalability This document details the tested cluster maximums for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, along with information about the test environment and configuration used to test the maximums. For ROSA with HCP clusters, the control plane is fully managed in the service AWS account and will automatically scale with the cluster. 5.1. ROSA with HCP cluster maximums Consider the following tested object maximums when you plan a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster installation. The table specifies the maximum limits for each tested type in a ROSA with HCP cluster. These guidelines are based on a cluster of 500 compute (also known as worker) nodes. For smaller clusters, the maximums are lower. Table 5.1. Tested cluster maximums (maximum type: 4.x tested maximum)
Number of pods [1]: 25,000
Number of pods per node: 250
Number of pods per core: There is no default value
Number of namespaces [2]: 5,000
Number of pods per namespace [3]: 25,000
Number of services [4]: 10,000
Number of services per namespace: 5,000
Number of back ends per service: 5,000
Number of deployments per namespace [3]: 2,000
The pod count displayed here is the number of test pods. The actual number of pods depends on the memory, CPU, and storage requirements of the application. When there are a large number of active projects, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to make etcd storage available. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a type, in a single namespace, can make those loops expensive and slow down processing the state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back end has a corresponding entry in iptables . The number of back ends of a given service impacts the size of the endpoints objects, which then impacts the size of data sent throughout the system. 5.2. Next steps Planning your environment 5.3. Additional resources Viewing cluster notifications using the Red Hat Hybrid Cloud Console | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/prepare_your_environment/rosa-hcp-limits-scalability
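As a rough, illustrative way to compare a running cluster against the tested maximums above, you can count objects with the oc client. These commands are not part of the documented guidance and assume cluster-admin access; column positions in wide output can vary between client versions:
# Total pods across all namespaces (tested maximum: 25,000)
$ oc get pods --all-namespaces --no-headers | wc -l
# Running pods per node (tested maximum: 250 per node); the node name is in column 8 of wide output
$ oc get pods --all-namespaces -o wide --field-selector=status.phase=Running --no-headers | awk '{print $8}' | sort | uniq -c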
19.2. Installation and Deployment | 19.2. Installation and Deployment Installation Guide The Installation Guide documents relevant information regarding the installation of Red Hat Enterprise Linux 7. This book also covers advanced installation methods such as kickstart, PXE installations, and installations over VNC, as well as common post-installation tasks. System Administrator's Guide The System Administrator's Guide provides information about deploying, configuring, and administering Red Hat Enterprise Linux 7. Storage Administration Guide The Storage Administration Guide provides instructions on how to effectively manage storage devices and file systems on Red Hat Enterprise Linux 7. It is intended for use by system administrators with intermediate experience in Red Hat Enterprise Linux. Global File System 2 The Global File System 2 book provides information about configuring and maintaining Red Hat GFS2 (Global File System 2) in Red Hat Enterprise Linux 7. Logical Volume Manager Administration The Logical Volume Manager Administration guide describes the LVM logical volume manager and provides information on running LVM in a clustered environment. Kernel Crash Dump Guide The Kernel Crash Dump Guide documents how to configure, test, and use the kdump crash recovery service available in Red Hat Enterprise Linux 7. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-installation_and_deployment |
Preface | Preface Red Hat Quay container registry platform provides secure storage, distribution, and governance of containers and cloud-native artifacts on any infrastructure. It is available as a standalone component or as an Operator on OpenShift Container Platform. Red Hat Quay includes the following features and benefits: Granular security management Fast and robust at any scale High velocity CI/CD Automated installation and updates Enterprise authentication and team-based access control OpenShift Container Platform integration Red Hat Quay is regularly released, containing new features, bug fixes, and software updates. To upgrade Red Hat Quay for both standalone and OpenShift Container Platform deployments, see Upgrade Red Hat Quay . Important Red Hat Quay only supports rolling back, or downgrading, to z-stream versions, for example, 3.7.2 to 3.7.1. Rolling back to y-stream versions (3.7.0 to 3.6.0) is not supported. This is because Red Hat Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Red Hat Quay. Database schema upgrades are not considered backwards compatible. Downgrading to z-streams is neither recommended nor supported by either Operator based deployments or virtual machine based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Red Hat Quay deployment must be made in conjunction with the Red Hat Quay support and development teams. For more information, contact Red Hat Quay support. Documentation for Red Hat Quay is versioned with each release. The latest Red Hat Quay documentation is available from the Red Hat Quay Documentation page. Currently, version 3 is the latest major version. Note Prior to version 2.9.2, Red Hat Quay was called Quay Enterprise. Documentation for 2.9.2 and prior versions is archived on the Product Documentation for Red Hat Quay 2.9 page. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_release_notes/pr01
Chapter 5. Recovery for StatefulSet pods | Chapter 5. Recovery for StatefulSet pods Pods that are part of a StatefulSet have a similar issue to pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . To get the pods that are part of a StatefulSet to re-create on the active zone after 6-8 minutes, you need to force delete the pod under the same conditions as pods with RWO volumes (that is, the OpenShift Container Platform node is powered off or communication is disconnected). An illustrative force delete command is shown after this section. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/recovering_a_metro-dr_stretch_cluster/recovery-for-statefulset-pods
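A minimal sketch of the force delete step described above; the pod name and namespace are placeholders for the pod that remains stuck on the failed zone:
# Force delete the stuck pod so that the StatefulSet controller re-creates it on the active zone
$ oc delete pod <pod-name> -n <namespace> --force --grace-period=0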
Chapter 12. Adding storage resources for hybrid or Multicloud | Chapter 12. Adding storage resources for hybrid or Multicloud 12.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Enter an Endpoint . This is optional. Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the Openshift Container Platform documentation. Each backingstore requires a different secret. For more information on creating the secret for a particular backingstore, see the Section 12.2, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage Data Foundation . Click the Backing Store tab to view all the backing stores. 12.2. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across cloud provider and clusters. You must add a backing storage that can be used by the MCG. Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 12.2.1, "Creating an AWS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 12.2.2, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 12.2.3, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 12.2.4, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 12.2.5, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 12.3, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 12.2.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. 
<bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 12.2.2. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For example, For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. Apply the following YAML for a specific backing store: <bucket-name> An existing IBM COS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to the MCG which endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 12.2.3. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager.
For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG which bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> with the name of the secret created in the step. 12.2.4. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 12.2.5. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager.
For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name > The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The local storage class name, recommended to use ocs-storagecluster-ceph-rbd . The output will be similar to the following: 12.3. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user. This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 12.4. Adding storage resources for hybrid and Multicloud using the user interface Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Storage Systems tab, select the storage system and then click Overview Object tab. Select the Multicloud Object Gateway link. Select the Resources tab in the left, highlighted below. 
From the list that populates, select Add Cloud Resource . Select Add new connection . Select the relevant native cloud provider or S3 compatible option and fill in the details. Select the newly created connection and map it to the existing bucket. Repeat these steps to create as many backing stores as needed. Note Resources created in NooBaa UI cannot be used by OpenShift UI or MCG CLI. 12.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class (OBC). Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab and search the new Bucket Class. 12.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 12.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage Data Foundation . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, clear the name of the backing store. Click Save . 12.8. 
Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. You can interact with objects in a namespace bucket using the S3 API. See S3 API endpoints for objects in namespace buckets for more information. Note A namespace bucket can only be used if its write target is available and functional. 12.8.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Red Hat OpenShift Data Foundation 4.6 onwards supports the following namespace bucket operations: ListObjectVersions ListObjects PutObject CopyObject ListParts CreateMultipartUpload CompleteMultipartUpload UploadPart UploadPartCopy AbortMultipartUpload GetObjectAcl GetObject HeadObject DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 12.8.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 12.8.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. 
<resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. 
Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources> A list of namespace-stores separated by commas that define the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the previous step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore.
<IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that define the read targets of the namespace bucket. Create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the previous step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 12.8.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites OpenShift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Data Foundation . Click the Namespace Store tab to create the namespacestore resources to be used in the namespace bucket. Click Create namespace store . Enter a namespacestore name. Choose a provider. Choose a region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Choose a target bucket. Click Create . Verify that the namespacestore is in the Ready state. Repeat these steps until you have the desired number of resources. Click the Bucket Class tab and click Create a new Bucket Class . Select the Namespace radio button. Enter a Bucket Class name. (Optional) Add a description. Click Next . Choose a namespace policy type for your namespace bucket, and then click Next . Select the target resources. If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Click Next . Review your new bucket class, and then click Create Bucketclass . On the BucketClass page, verify that your newly created resource is in the Created phase. In the OpenShift Web Console, click Storage Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears.
In the Object tab, click Multicloud Object Gateway Buckets Namespace Buckets tab . Click Create Namespace Bucket . On the Choose Name tab, specify a name for the namespace bucket and click Next . On the Set Placement tab: Under Read Policy , select the checkbox for each namespace resource created in the earlier step that the namespace bucket should read data from. If the namespace policy type you are using is Multi , then, under Write Policy , specify which namespace resource the namespace bucket should write data to. Click Next . Click Create . Verification steps Verify that the namespace bucket is listed with a green check mark in the State column, the expected number of read resources, and the expected write resource name. 12.9. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 12, Adding storage resources for hybrid or Multicloud . You can set up data mirroring by using the OpenShift UI, YAML, or the MCG command-line interface. See the following sections: Section 12.9.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 12.9.2, "Creating bucket classes to mirror data using a YAML" 12.9.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure that you have downloaded the Multicloud Object Gateway (MCG) command-line interface. Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 12.9.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Section 12.11, "Object Bucket Claim" . 12.10. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 12.10.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 12.10.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Section 11.2, "Accessing the Multicloud Object Gateway with your applications" Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy .
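For example, a minimal policy of the same shape as the sample in this guide's command listing might grant a single NooBaa account read-only access to one bucket. This is only a sketch: the bucket name, account name, and endpoint below are placeholder assumptions, not values created by this procedure:
{ "Version": "NewVersion", "Statement": [ { "Sid": "ReadOnlyExample", "Effect": "Allow", "Principal": [ "[email protected]" ], "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*" ] } ] }
Saved as BucketPolicy, the policy could then be applied with a command of the form:
aws --endpoint <ENDPOINT> --no-verify-ssl s3api put-bucket-policy --bucket my-bucket --policy file://BucketPolicy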
Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 12.10.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at the Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). Note To give MCG accounts access to certain buckets, use AWS S3 bucket policies. For more information, see Using bucket policies in AWS documentation. 12.11. Object Bucket Claim An Object Bucket Claim can be used to request an S3-compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 12.11.1, "Dynamic Object Bucket Claim" Section 12.11.2, "Creating an Object Bucket Claim using the command line interface" Section 12.11.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 12.11.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket Claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints use self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoint certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC, add more lines to the YAML file.
For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 12.11.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 12.11.3. Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 12.11.1, "Dynamic Object Bucket Claim" . 
Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Bucket Claims Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from previous OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 12.11.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 12.11.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage Object Buckets . Optional: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected, you are navigated to the Object Bucket Details page. 12.11.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage Object Bucket Claims . Click the Action menu (...) next to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . 12.12. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3-compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. Important Cache buckets are a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . AWS S3 IBM COS 12.12.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager.
In case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 12.12.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. 
First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 12.13. Scaling Multicloud Object Gateway performance by adding endpoints The Multicloud Object Gateway performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The Multicloud Object Gateway resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service 12.13.1. Scaling the Multicloud Object Gateway with storage nodes Prerequisites A running OpenShift Data Foundation cluster on OpenShift Container Platform with access to the Multicloud Object Gateway (MCG). A storage node in the MCG is a NooBaa daemon container attached to one or more Persistent Volumes (PVs) and used for local object service data storage. NooBaa daemons can be deployed on Kubernetes nodes. This can be done by creating a Kubernetes pool consisting of StatefulSet pods. Procedure Log in to OpenShift Web Console . From the MCG user interface, click Overview Add Storage Resources . In the window, click Deploy Kubernetes Pool . In the Create Pool step, create the target pool for the future installed nodes. In the Configure step, configure the number of requested pods and the size of each PV. For each new pod, one PV is to be created. In the Review step, you can find the details of the new pool and select the deployment method you wish to use: local or external deployment. If local deployment is selected, the Kubernetes nodes will deploy within the cluster. If external deployment is selected, you will be provided with a YAML file to run externally. All nodes will be assigned to the pool you chose in the first step, and can be found under Resources Storage resources Resource name . 12.14. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request.
When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. | [
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"",
"apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"",
"noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage",
"get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage",
"INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"",
"apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage",
"noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage",
"noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>",
"noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror",
"noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws",
"apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror",
"additionalConfig: bucketclass: mirror-to-aws",
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--default_resource='']",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io",
"apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY",
"oc apply -f <yaml.file>",
"oc get cm <obc-name> -o yaml",
"oc get secret <obc_name> -o yaml",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa obc create <obc-name> -n openshift-storage",
"INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"",
"oc get obc -n openshift-storage",
"NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s",
"oc get obc test21obc -o yaml -n openshift-storage",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound",
"oc get -n openshift-storage secret test21obc -o yaml",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque",
"oc get -n openshift-storage cm test21obc -o yaml",
"apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms",
"subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/adding-storage-resources-for-hybrid-or-Multicloud |
Chapter 2. Requirements | Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements. 
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Network range for self-hosted engine deployment The self-hosted engine deployment process temporarily uses a /24 network address under 192.168 . It defaults to 192.168.222.0/24 , and if this address is in use, it tries other /24 addresses under 192.168 until it finds one that is not in use. If it does not find an unused network address in this range, deployment fails. When installing the self-hosted engine using the command line, you can set the deployment script to use an alternate /24 network range with the option --ansible-extra-vars=he_ipv4_subnet_prefix= PREFIX , where PREFIX is the prefix for the default range. For example: # hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222 Note You can only set another range by installing Red Hat Virtualization as a self-hosted engine using the command line. 2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... 
) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.4. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageo service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . 
Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. 
H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageo service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.6. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.7. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting after the environment is set up to a different MTU. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU . | [
"grep -E 'svm|vmx' /proc/cpuinfo | grep nx",
"hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/rhv_requirements |
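As a rough illustration of the name-resolution and firewall requirements described above, the following sketch verifies that a host's forward and reverse DNS records agree and that the Manager can reach the VDSM port on that host. The host name, IP address, and port are placeholders, and the commands assume the dig and nc utilities are available:
# Forward and reverse lookups must resolve and match exactly (placeholder values)
dig +short host1.example.com
dig +short -x 192.0.2.21
# From the Manager, confirm the VDSM management port (54321/tcp) is reachable on the host
nc -zv host1.example.com 54321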
Developing Kafka client applications | Developing Kafka client applications Red Hat Streams for Apache Kafka 2.9 Develop client applications to interact with Kafka using Streams for Apache Kafka | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/index |
Chapter 74. sfc | Chapter 74. sfc This chapter describes the commands under the sfc command. 74.1. sfc flow classifier create Create a flow classifier Usage: Table 74.1. Positional arguments Value Summary <name> Name of the flow classifier Table 74.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the flow classifier --protocol <protocol> Ip protocol name. protocol name should be as per iana standard. --ethertype {IPv4,IPv6} L2 ethertype, default is ipv4 --source-port <min-port>:<max-port> Source protocol port (allowed range [1,65535]. must be specified as a:b, where a=min-port and b=max-port) in the allowed range. --destination-port <min-port>:<max-port> Destination protocol port (allowed range [1,65535]. Must be specified as a:b, where a=min-port and b=max- port) in the allowed range. --source-ip-prefix <source-ip-prefix> Source ip address in cidr notation --destination-ip-prefix <destination-ip-prefix> Destination ip address in cidr notation --logical-source-port <logical-source-port> Neutron source port (name or id) --logical-destination-port <logical-destination-port> Neutron destination port (name or id) --l7-parameters L7_PARAMETERS Dictionary of l7 parameters. currently, no value is supported for this option. Table 74.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.2. sfc flow classifier delete Delete a given flow classifier Usage: Table 74.7. Positional arguments Value Summary <flow-classifier> Flow classifier to delete (name or id) Table 74.8. Command arguments Value Summary -h, --help Show this help message and exit 74.3. sfc flow classifier list List flow classifiers Usage: Table 74.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 74.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.4. sfc flow classifier set Set flow classifier properties Usage: Table 74.14. Positional arguments Value Summary <flow-classifier> Flow classifier to modify (name or id) Table 74.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the flow classifier --description <description> Description for the flow classifier 74.5. sfc flow classifier show Display flow classifier details Usage: Table 74.16. Positional arguments Value Summary <flow-classifier> Flow classifier to display (name or id) Table 74.17. Command arguments Value Summary -h, --help Show this help message and exit Table 74.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.6. sfc port chain create Create a port chain Usage: Table 74.22. Positional arguments Value Summary <name> Name of the port chain Table 74.23. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port chain --flow-classifier <flow-classifier> Add flow classifier (name or id). this option can be repeated. --chain-parameters correlation=<correlation-type>,symmetric=<boolean> Dictionary of chain parameters. supports correlation=(mpls|nsh) (default is mpls) and symmetric=(true|false). --port-pair-group <port-pair-group> Add port pair group (name or id). this option can be repeated. Table 74.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.7. sfc port chain delete Delete a given port chain Usage: Table 74.28. Positional arguments Value Summary <port-chain> Port chain to delete (name or id) Table 74.29. 
Command arguments Value Summary -h, --help Show this help message and exit 74.8. sfc port chain list List port chains Usage: Table 74.30. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 74.31. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.32. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.9. sfc port chain set Set port chain properties Usage: Table 74.35. Positional arguments Value Summary <port-chain> Port chain to modify (name or id) Table 74.36. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port chain --description <description> Description for the port chain --flow-classifier <flow-classifier> Add flow classifier (name or id). this option can be repeated. --no-flow-classifier Remove associated flow classifiers from the port chain --port-pair-group <port-pair-group> Add port pair group (name or id). current port pair groups order is kept, the added port pair group will be placed at the end of the port chain. This option can be repeated. --no-port-pair-group Remove associated port pair groups from the port chain. At least one --port-pair-group must be specified together. 74.10. sfc port chain show Display port chain details Usage: Table 74.37. Positional arguments Value Summary <port-chain> Port chain to display (name or id) Table 74.38. Command arguments Value Summary -h, --help Show this help message and exit Table 74.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.11. sfc port chain unset Unset port chain properties Usage: Table 74.43. 
Positional arguments Value Summary <port-chain> Port chain to unset (name or id) Table 74.44. Command arguments Value Summary -h, --help Show this help message and exit --flow-classifier <flow-classifier> Remove flow classifier(s) from the port chain (name or ID). This option can be repeated. --all-flow-classifier Remove all flow classifiers from the port chain --port-pair-group <port-pair-group> Remove port pair group(s) from the port chain (name or ID). This option can be repeated. 74.12. sfc port pair create Create a port pair Usage: Table 74.45. Positional arguments Value Summary <name> Name of the port pair Table 74.46. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port pair --service-function-parameters correlation=<correlation-type>,weight=<weight> Dictionary of service function parameters. currently, correlation=(None|mpls|nsh) and weight are supported. Weight is an integer that influences the selection of a port pair within a port pair group for a flow. The higher the weight, the more flows will hash to the port pair. The default weight is 1. --ingress <ingress> Ingress neutron port (name or id) --egress <egress> Egress neutron port (name or id) Table 74.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.13. sfc port pair delete Delete a given port pair Usage: Table 74.51. Positional arguments Value Summary <port-pair> Port pair to delete (name or id) Table 74.52. Command arguments Value Summary -h, --help Show this help message and exit 74.14. sfc port pair group create Create a port pair group Usage: Table 74.53. Positional arguments Value Summary <name> Name of the port pair group Table 74.54. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port pair group --port-pair <port-pair> Port pair (name or id). this option can be repeated. --enable-tap Port pairs of this port pair group are deployed as passive tap service function --disable-tap Port pairs of this port pair group are deployed as l3 service function (default) --port-pair-group-parameters lb-fields=<lb-fields> Dictionary of port pair group parameters. currently only one parameter lb-fields is supported. <lb-fields> is a & separated list of load-balancing fields. Table 74.55. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.56. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.57. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.58. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.15. sfc port pair group delete Delete a given port pair group Usage: Table 74.59. Positional arguments Value Summary <port-pair-group> Port pair group to delete (name or id) Table 74.60. Command arguments Value Summary -h, --help Show this help message and exit 74.16. sfc port pair group list List port pair group Usage: Table 74.61. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 74.62. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.63. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.64. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.17. sfc port pair group set Set port pair group properties Usage: Table 74.66. Positional arguments Value Summary <port-pair-group> Port pair group to modify (name or id) Table 74.67. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port pair group --description <description> Description for the port pair group --port-pair <port-pair> Port pair (name or id). this option can be repeated. --no-port-pair Remove all port pair from port pair group 74.18. sfc port pair group show Display port pair group details Usage: Table 74.68. Positional arguments Value Summary <port-pair-group> Port pair group to display (name or id) Table 74.69. Command arguments Value Summary -h, --help Show this help message and exit Table 74.70. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.19. sfc port pair group unset Unset port pairs from port pair group Usage: Table 74.74. Positional arguments Value Summary <port-pair-group> Port pair group to unset (name or id) Table 74.75. Command arguments Value Summary -h, --help Show this help message and exit --port-pair <port-pair> Remove port pair(s) from the port pair group (name or ID). This option can be repeated. --all-port-pair Remove all port pairs from the port pair group 74.20. sfc port pair list List port pairs Usage: Table 74.76. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 74.77. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.78. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.79. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.80. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.21. sfc port pair set Set port pair properties Usage: Table 74.81. Positional arguments Value Summary <port-pair> Port pair to modify (name or id) Table 74.82. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port pair --description <description> Description for the port pair 74.22. sfc port pair show Display port pair details Usage: Table 74.83. Positional arguments Value Summary <port-pair> Port pair to display (name or id) Table 74.84. Command arguments Value Summary -h, --help Show this help message and exit Table 74.85. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.86. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.87. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.88. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.23. sfc service graph create Create a service graph. Usage: Table 74.89. Positional arguments Value Summary <name> Name of the service graph. Table 74.90. Command arguments Value Summary -h, --help Show this help message and exit --description DESCRIPTION Description for the service graph. --branching-point SRC_CHAIN:DST_CHAIN_1,DST_CHAIN_2,DST_CHAIN_N Service graph branching point: the key is the source Port Chain while the value is a list of destination Port Chains. This option can be repeated. Table 74.91. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.92. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.93. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.94. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.24. sfc service graph delete Delete a given service graph. Usage: Table 74.95. Positional arguments Value Summary <service-graph> Id or name of the service graph to delete. Table 74.96. Command arguments Value Summary -h, --help Show this help message and exit 74.25. sfc service graph list List service graphs Usage: Table 74.97. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 74.98. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 74.99. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 74.100. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.101. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 74.26. sfc service graph set Set service graph properties Usage: Table 74.102. Positional arguments Value Summary <service-graph> Service graph to modify (name or id) Table 74.103. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the service graph --description <description> Description for the service graph 74.27. 
sfc service graph show Show information of a given service graph. Usage: Table 74.104. Positional arguments Value Summary <service-graph> Id or name of the service graph to display. Table 74.105. Command arguments Value Summary -h, --help Show this help message and exit Table 74.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 74.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 74.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 74.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack sfc flow classifier create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--protocol <protocol>] [--ethertype {IPv4,IPv6}] [--source-port <min-port>:<max-port>] [--destination-port <min-port>:<max-port>] [--source-ip-prefix <source-ip-prefix>] [--destination-ip-prefix <destination-ip-prefix>] [--logical-source-port <logical-source-port>] [--logical-destination-port <logical-destination-port>] [--l7-parameters L7_PARAMETERS] <name>",
"openstack sfc flow classifier delete [-h] <flow-classifier>",
"openstack sfc flow classifier list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack sfc flow classifier set [-h] [--name <name>] [--description <description>] <flow-classifier>",
"openstack sfc flow classifier show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flow-classifier>",
"openstack sfc port chain create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--flow-classifier <flow-classifier>] [--chain-parameters correlation=<correlation-type>,symmetric=<boolean>] --port-pair-group <port-pair-group> <name>",
"openstack sfc port chain delete [-h] <port-chain>",
"openstack sfc port chain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack sfc port chain set [-h] [--name <name>] [--description <description>] [--flow-classifier <flow-classifier>] [--no-flow-classifier] [--port-pair-group <port-pair-group>] [--no-port-pair-group] <port-chain>",
"openstack sfc port chain show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-chain>",
"openstack sfc port chain unset [-h] [--flow-classifier <flow-classifier> | --all-flow-classifier] [--port-pair-group <port-pair-group>] <port-chain>",
"openstack sfc port pair create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--service-function-parameters correlation=<correlation-type>,weight=<weight>] --ingress <ingress> --egress <egress> <name>",
"openstack sfc port pair delete [-h] <port-pair>",
"openstack sfc port pair group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--port-pair <port-pair>] [--enable-tap | --disable-tap] [--port-pair-group-parameters lb-fields=<lb-fields>] <name>",
"openstack sfc port pair group delete [-h] <port-pair-group>",
"openstack sfc port pair group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack sfc port pair group set [-h] [--name <name>] [--description <description>] [--port-pair <port-pair>] [--no-port-pair] <port-pair-group>",
"openstack sfc port pair group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-pair-group>",
"openstack sfc port pair group unset [-h] [--port-pair <port-pair> | --all-port-pair] <port-pair-group>",
"openstack sfc port pair list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack sfc port pair set [-h] [--name <name>] [--description <description>] <port-pair>",
"openstack sfc port pair show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-pair>",
"openstack sfc service graph create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description DESCRIPTION] --branching-point SRC_CHAIN:DST_CHAIN_1,DST_CHAIN_2,DST_CHAIN_N <name>",
"openstack sfc service graph delete [-h] <service-graph>",
"openstack sfc service graph list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack sfc service graph set [-h] [--name <name>] [--description <description>] <service-graph>",
"openstack sfc service graph show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service-graph>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/sfc |
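To illustrate how these commands fit together, the following sketch steers hypothetical HTTP traffic through a single service function. Every name, port range, and Neutron port reference is a placeholder that must be adapted to your environment:
# Create a port pair from the service function's ingress and egress Neutron ports
openstack sfc port pair create --ingress sf1-ingress-port --egress sf1-egress-port PP1
# Group the port pair so the chain can load-balance across port pairs later
openstack sfc port pair group create --port-pair PP1 PPG1
# Classify TCP traffic to destination port 80 arriving from a logical source port
openstack sfc flow classifier create --protocol tcp --destination-port 80:80 --logical-source-port client-vm-port FC1
# Build the port chain from the port pair group and the flow classifier
openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 PC1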
Chapter 2. Top new features | Chapter 2. Top new features This section provides an overview of the top new features in this release of Red Hat OpenStack Platform (RHOSP). 2.1. Red Hat OpenStack Platform director This section outlines the top new features for Red Hat OpenStack Platform (RHOSP) director. Validation framework output formats RHOSP contains a validation framework to help verify the requirements and functionality of the undercloud and overcloud. The framework includes new output formats for validation logs: validation_json The framework saves JSON-formatted validation results as a log file in /var/log/validations . This is the default callback for the validation framework. validation_stdout The framework displays JSON-formatted validation results on screen. http_json The framework sends JSON-formatted validation results to an external logging server. Use the ANSIBLE_STDOUT_CALLBACK environment variable to set the format that you want with your openstack tripleo validator run command: 2.2. Backup and restore This section outlines the top new features and changes for Red Hat OpenStack Platform (RHOSP) backup and restore components. Sequential backup for control plane nodes The backup process for control plane nodes now runs sequentially on each node instead of simultaneously on all nodes. Therefore, you can create a backup of the control plane nodes without service disruption to your environment. 2.3. Compute This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Compute service (nova). Memory encryption for instances You can configure AMD SEV Compute nodes to provide cloud users the ability to create instances that use memory encryption. For more information, see Configuring AMD SEV Compute nodes to provide memory encryption for instances . vGPU resize and cold migration Instances with a vGPU flavor are automatically re-allocated the vGPU resources after resize and cold migration operations. Image downloads direct from RBD You can configure the Compute service to download images directly from the RBD image repository without using the Image service API, when: the Image service (glance) uses Red Hat Ceph RADOS Block Device (RBD) as the back end and the Compute service uses local file-based ephemeral storage, you can configure the Compute service to download images directly from the RBD image repository without using the Image service API. This reduces the time it takes to download an image to the Compute node image cache at instance boot time, which improves instance launch time. For more information, see Configuring image downloads directly from Red Hat Ceph RADOS Block Device (RBD) . 2.4. Distributed Compute Nodes (DCN) This section outlines the top new features for Distributed Compute Nodes (DCN) in Red Hat OpenStack Platform (RHOSP). ML2/OVN support In RHOSP 16.2, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) is now fully supported for DCN architectures. Exclude RAW images from DCN edge sites In RHOSP 16.2, you can use the NovaImageTypeExcludeList with a value of raw to exclude raw images from advertisement on edge sites that do not have Ceph storage. Excluding raw images from sites without storage limits the use of unnecessary network and local storage resources. Externally managed Red Hat Ceph Storage at the edge With the release of RHOSP 16.2, you can now use Red Hat Ceph Storage that is not deployed by RHOSP director at your edge site. 2.5. 
Networking This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Networking service. ML2/OVN support for routed provider networks Starting in RHOSP 16.2 GA, you can use the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) to deploy routed provider networks. Routed provider networks (RPNs) are common in edge distributed compute node (DCN) and spine-leaf routed data center deployments. RPNs enable a single provider network to represent multiple layer 2 networks (broadcast domains) or network segments, permitting the operator to present only one network to users. For more information, see Deploying routed provider networks in the Networking Guide . Availability zones for ML2/OVS and ML2/OVN Starting in RHOSP 16.2 GA, with the RHOSP Networking service you can group nodes in availability zones (AZs). For nodes that run crucial services, you can schedule these nodes for resources with high availability. AZs are supported only for the Modular Layer 2 plug-in with the Open Virtual Network (ML2/OVN) and Open vSwitch (ML2/OVS) mechanism drivers. For more information, see Using availability zones to make network resources highly available in the Networking Guide . 2.6. Storage This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) storage services. Automation for DM-Multipathing redundancy configuration In RHOSP 16.2.3, the DM-Multipathing redundancy configuration for the Block Storage service (cinder) is now automated. Sparse image upload With the Image service (glance) API, you can enable sparse image upload to reduce demand on the image storage back end. In sparse images, the Image service does not interpret null byte (empty) sequences as data, therefore only the data itself consumes storage. This feature is particularly useful in distributed compute node (DCN) environments. Sparse image upload also reduces network traffic and improves the image upload speed. Multiple back ends By default, a standard Shared File Systems service (manila) deployment environment file has a single back end. With this release, you can configure the Shared File Systems service to use one or more supported back ends. Image pre-caching RHOSP director can pre-cache images as part of the glance-api service. With this release, the image pre-cache feature is fully supported. Configuring an external NFS share for conversion The Block Storage service (cinder) can now use an external NFS share to perform image format conversion of Image service (glance) images on the overcloud Controller nodes. Using this functionality prevents the space on the node from being completely filled during a conversion operation. See Configuring an external NFS share for conversion . Red Hat OpenShift Container Platform support The Shared File Systems service (manila) with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For important recommendations, see CephFS NFS Manila-CSI Workload Recommendations for Red Hat OpenStack Platform 16.x . Support for automating multipath deployments In release 16.2.4, you can specify the location of your multipath configuration file for your overcloud deployment. 2.7. Bare Metal Provisioning This section outlines the top new features for the Red Hat OpenStack Platform (RHOSP) Bare Metal Provisioning service (ironic). 
Policy-based routing With this enhancement, you can use policy-based routing for RHOSP nodes to configure multiple route tables and routing rules with os-net-config . Policy-based routing uses route tables where, on a host with multiple links, you can send traffic through a particular interface depending on the source address. You can also define route rules for each interface. 2.8. Network Functions Virtualization This section outlines the top new features for Red Hat OpenStack Platform (RHOSP) Network Functions Virtualization (NFV). Modify kernel args RHOSP 16.2 includes an update to allow you to modify the kernel args on a deployed node. AMD support for SRIOV and DPDK RHOSP 16.2 includes support for Single Root Input/Output Virtualization(SR-IOV) and Data Plane Development Kit(DPDK) workloads on AMD hosts. 2.9. Other features Red Hat OpenStack Platform director operator The Red Hat OpenStack Platform (RHOSP) director operator creates a set of custom resource definitions (CRDs) on top of Red Hat OpenShift Container Platform to manage resources normally created by the RHOSP undercloud. CRDs are split into two types for hardware provisioning and software configuration. The operator includes CRDs to create and manage overcloud nets (IPAM), VMSets (for RHOSP Controllers), and BaremetalSets (for RHOSP Computes). The RHOSP director operator became a fully supported feature shortly after the release of the RHOSP 16.2.4 Maintenance Release, on December 13, 2022. For more information, see RHBA-2022:8952 , Release of containers for Red Hat OpenStack Platform 16.2.4 director operator. 2.10. Technology previews This section provides an overview of the top new technology previews in this release of Red Hat OpenStack Platform (RHOSP). Note For more information on the support scope for features marked as technology previews, see Technology Preview Features Support Scope . Transport Layer Security everywhere (TLS-e) now includes memcached As a technology preview, you can now configure memcached traffic to be encrypted when you configure TLS-e. Timemaster (Precision Time Protocol and Chrony) A technology preview is available that supports the use of timemaster to configure Precision Time Protocol (PTP) and Chrony in NFV deployments. Open vSwitch (OVS) Poll Mode Driver (PMD) Auto Load Balance You can use Open vSwitch (OVS) Poll Mode Driver (PMD) threads to perform the following tasks for user space context switching: Continuous polling of input ports for packets. Classifying received packets. Executing actions on the packets after classification. With this technology preview update, you can modify the following parameters to configure OVS PMD automatic load balance: OvsPmdAutoLb OvsPmdLoadThreshold OvsPmdImprovementThreshold OvsPmdRebalInterval See Configuring OVS PMD Auto Load Balance . Security group logging With this technology preview, you can create packet logs for security groups to monitor traffic flows and attempts into and out of an instance. Each log generates a stream of data about events and appends it to a common log file on the Compute host from which the instance was launched. You can associate any port of an instance with one or more security groups and define one or more rules for each security group. For example, you can create a rule to allow inbound SSH traffic to any instance in a security group named finance. You can create another rule in the same security group to allow instances in that group to send and respond to ICMP (ping) messages. 
Then you can create packet logs to record combinations of packet flow events with the related security groups. 2.11. Upgrades This section outlines the top new features for Red Hat OpenStack Platform (RHOSP) upgrades. Customize base packages after Leapp upgrade In release 16.2.4, after you upgrade your host from Red Hat Enterprise Linux (RHEL) 7.9 to RHEL 8.4, you can specify additional packages to install in your environment by using the BaseTripeloPackages variable. With this feature, you can customize the base packages that your deployment requires on specific roles. For more information, see Customizing the base packages after a Leapp upgrade . Upgrade the entire overcloud at once In release 16.2.4, if you are prepared to take your data plane offline, you can now upgrade the whole overcloud at once. With this enhancement, you complete the upgrade much faster, at the cost of some data plane downtime. For more information, see Speeding up an overcloud upgrade . Update from any source 16.1.z version to the latest minor Red Hat OpenStack Platform version Starting in RHOSP 16.2.4, you can update your RHOSP environment from any source 16.1.z version. This enhancement reduces cost and saves time during the update process. | [
"openstack tripleo validator run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=<callback> --validation check-ram"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/release_notes/assembly_relnotes-top-new-features |
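For example, building on the check-ram validation shown above, the callback can be set explicitly to either the file-based or the on-screen JSON format; the validation name is only illustrative:
# Write JSON-formatted results to a log file under /var/log/validations (the default callback)
openstack tripleo validator run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=validation_json --validation check-ram
# Print JSON-formatted results to the terminal instead
openstack tripleo validator run --extra-env-vars ANSIBLE_STDOUT_CALLBACK=validation_stdout --validation check-ram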
Chapter 5. Uninstalling a cluster on Nutanix | Chapter 5. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 5.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_nutanix/uninstalling-cluster-nutanix |
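As a concrete sketch of the procedure above, assuming the installation assets were kept in a directory named nutanix-cluster (a hypothetical name), the removal and optional cleanup might look like this:
# Destroy the cluster using the metadata.json stored in the installation directory
./openshift-install destroy cluster --dir nutanix-cluster --log-level info
# Optional: remove the installation directory and the installer binary
rm -rf nutanix-cluster openshift-install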
Chapter 2. APIServer [config.openshift.io/v1] | Chapter 2. APIServer [config.openshift.io/v1] Description APIServer holds configuration (like serving certificates, client CA and CORS domains) shared by all API servers in the system, among them especially kube-apiserver and openshift-apiserver. The canonical name of an instance is 'cluster'. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 2.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalCORSAllowedOrigins array (string) additionalCORSAllowedOrigins lists additional, user-defined regular expressions describing hosts for which the API server allows access using the CORS headers. This may be needed to access the API and the integrated OAuth server from JavaScript applications. The values are regular expressions that correspond to the Golang regular expression language. audit object audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. clientCA object clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. encryption object encryption allows the configuration of encryption of resources at the datastore layer. servingCerts object servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. 2.1.2. .spec.audit Description audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. Type object Property Type Description customRules array customRules specify profiles per group. These profile take precedence over the top-level profile field if they apply. 
They are evaluated from top to bottom and the first one that matches applies. customRules[] object AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. profile string profile specifies the name of the desired top-level audit profile to be applied to all requests sent to any of the OpenShift-provided API servers in the cluster (kube-apiserver, openshift-apiserver and oauth-apiserver), with the exception of those requests that match one or more of the customRules. The following profiles are provided: - Default: default policy which means MetaData level logging with the exception of events (not logged at all), oauthaccesstokens and oauthauthorizetokens (both logged at RequestBody level). - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. Warning: It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. If unset, the 'Default' profile is used as the default. 2.1.3. .spec.audit.customRules Description customRules specify profiles per group. These profiles take precedence over the top-level profile field if they apply. They are evaluated from top to bottom and the first one that matches applies. Type array 2.1.4. .spec.audit.customRules[] Description AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. Type object Required group profile Property Type Description group string group is the name of a group that a request user must be a member of in order for this profile to apply. profile string profile specifies the name of the desired audit policy configuration to be deployed to all OpenShift-provided API servers in the cluster. The following profiles are provided: - Default: the existing default policy. - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. If unset, the 'Default' profile is used as the default. 2.1.5. .spec.clientCA Description clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 2.1.6. .spec.encryption Description encryption allows the configuration of encryption of resources at the datastore layer.
Type object Property Type Description type string type defines what encryption type should be used to encrypt resources at the datastore layer. When this field is unset (i.e. when it is set to the empty string), identity is implied. The behavior of unset can and will change over time. Even if encryption is enabled by default, the meaning of unset may change to a different encryption type based on changes in best practices. When encryption is enabled, all sensitive resources shipped with the platform are encrypted. This list of sensitive resources can and will change over time. The current authoritative list is: 1. secrets 2. configmaps 3. routes.route.openshift.io 4. oauthaccesstokens.oauth.openshift.io 5. oauthauthorizetokens.oauth.openshift.io 2.1.7. .spec.servingCerts Description servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. Type object Property Type Description namedCertificates array namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. namedCertificates[] object APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. 2.1.8. .spec.servingCerts.namedCertificates Description namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. Type array 2.1.9. .spec.servingCerts.namedCertificates[] Description APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. Type object Property Type Description names array (string) names is an optional list of explicit DNS names (leading wildcards allowed) that should use this certificate to serve secure traffic. If no names are provided, the implicit names will be extracted from the certificates. Exact names take precedence over wildcard names. Explicit names defined here take precedence over extracted implicit names. servingCertificate object servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. 2.1.10. .spec.servingCerts.namedCertificates[].servingCertificate Description servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 2.1.11. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. Type object Property Type Description custom `` custom is a user-defined TLS security profile.
Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 2.1.12. .status Description status holds observed values from the cluster. They may not be overridden. Type object 2.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/apiservers DELETE : delete collection of APIServer GET : list objects of kind APIServer POST : create an APIServer /apis/config.openshift.io/v1/apiservers/{name} DELETE : delete an APIServer GET : read the specified APIServer PATCH : partially update the specified APIServer PUT : replace the specified APIServer /apis/config.openshift.io/v1/apiservers/{name}/status GET : read status of the specified APIServer PATCH : partially update status of the specified APIServer PUT : replace status of the specified APIServer 2.2.1. /apis/config.openshift.io/v1/apiservers Table 2.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIServer Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind APIServer Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK APIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an APIServer Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body APIServer schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 202 - Accepted APIServer schema 401 - Unauthorized Empty 2.2.2. /apis/config.openshift.io/v1/apiservers/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the APIServer Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIServer Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIServer Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIServer Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIServer Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body APIServer schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty 2.2.3. /apis/config.openshift.io/v1/apiservers/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the APIServer Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIServer Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIServer Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIServer Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body APIServer schema Table 2.31. 
HTTP responses HTTP code Response body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/apiserver-config-openshift-io-v1
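For orientation, the spec fields documented above can be combined in a single APIServer resource. The following is a minimal, illustrative sketch only; the profile values chosen here (WriteRequestBodies, Intermediate) are assumptions for the example, not recommendations:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster                    # the canonical instance name
spec:
  audit:
    profile: WriteRequestBodies    # also logs payloads of create/update/patch requests
  tlsSecurityProfile:
    type: Intermediate             # Mozilla Intermediate profile, minimum TLS 1.2
    intermediate: {}

Because the cluster instance already exists, it is normally modified in place, for example with oc edit apiserver cluster, rather than created through the POST endpoint listed above.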
Windows Container Support for OpenShift | Windows Container Support for OpenShift OpenShift Container Platform 4.7 Red Hat OpenShift for Windows Containers Guide Red Hat OpenShift Documentation Team | [
"Path : Microsoft.PowerShell.Core\\FileSystem::C:\\var\\run\\secrets\\kubernetes.io\\serviceaccount\\..2021_08_31_22_22_18.318230061\\ca.crt Owner : BUILTIN\\Administrators Group : NT AUTHORITY\\SYSTEM Access : NT AUTHORITY\\SYSTEM Allow FullControl BUILTIN\\Administrators Allow FullControl BUILTIN\\Users Allow ReadAndExecute, Synchronize Audit : Sddl : O:BAG:SYD:AI(A;ID;FA;;;SY)(A;ID;FA;;;BA)(A;ID;0x1200a9;;;BU)",
"apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f wmco-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator",
"oc create -f <file-name>.yaml",
"oc create -f wmco-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4",
"oc create -f <file-name>.yaml",
"oc create -f wmco-sub.yaml",
"oc get csv -n openshift-windows-machine-config-operator",
"NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded",
"oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1",
"aws ec2 describe-images --region <aws region name> --filters \"Name=name,Values=Windows_Server-2019*English*Full*Containers*\" \"Name=is-public,Values=true\" --query \"reverse(sort_by(Images, &CreationDate))[*].{name: Name, id: ImageId}\" --output table",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-windows-worker-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: ami: id: <windows_container_ami> 9 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 10 instanceType: m5a.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: windows-user-data 16 namespace: openshift-machine-api",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 9 offer: WindowsServer publisher: MicrosoftWindowsServer resourceID: \"\" sku: 2019-Datacenter-with-Containers version: latest kind: AzureMachineProviderSpec location: <location> 10 managedIdentity: <infrastructure_id>-identity 11 networkResourceGroup: <infrastructure_id>-rg 12 osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Windows publicIP: false resourceGroup: <infrastructure_id>-rg 13 subnet: <infrastructure_id>-worker-subnet userDataSecret: name: windows-user-data 14 namespace: openshift-machine-api vmSize: Standard_D2s_v3 vnet: <infrastructure_id>-vnet 15 zone: \"<zone>\" 16",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"exclude-nics=",
"C:\\> ipconfig",
"PS C:\\> Get-Service -Name VMTools | Select Status, StartType",
"PS C:\\> New-NetFirewallRule -DisplayName \"ContainerLogsPort\" -LocalPort 10250 -Enabled True -Direction Inbound -Protocol TCP -Action Allow -EdgeTraversalPolicy Allow",
"C:\\> C:\\Windows\\System32\\Sysprep\\sysprep.exe /generalize /oobe /shutdown /unattend:<path_to_unattend.xml> 1",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <unattend xmlns=\"urn:schemas-microsoft-com:unattend\"> <settings pass=\"specialize\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-International-Core\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <InputLocale>0409:00000409</InputLocale> <SystemLocale>en-US</SystemLocale> <UILanguage>en-US</UILanguage> <UILanguageFallback>en-US</UILanguageFallback> <UserLocale>en-US</UserLocale> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Security-SPP-UX\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <SkipAutoActivation>true</SkipAutoActivation> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-SQMApi\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <CEIPEnabled>0</CEIPEnabled> </component> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <ComputerName>winhost</ComputerName> 1 </component> </settings> <settings pass=\"oobeSystem\"> <component xmlns:wcm=\"http://schemas.microsoft.com/WMIConfig/2002/State\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" name=\"Microsoft-Windows-Shell-Setup\" processorArchitecture=\"amd64\" publicKeyToken=\"31bf3856ad364e35\" language=\"neutral\" versionScope=\"nonSxS\"> <AutoLogon> <Enabled>false</Enabled> 2 </AutoLogon> <OOBE> <HideEULAPage>true</HideEULAPage> <HideLocalAccountScreen>true</HideLocalAccountScreen> <HideOEMRegistrationScreen>true</HideOEMRegistrationScreen> <HideOnlineAccountScreens>true</HideOnlineAccountScreens> <HideWirelessSetupInOOBE>true</HideWirelessSetupInOOBE> <NetworkLocation>Work</NetworkLocation> <ProtectYourPC>1</ProtectYourPC> <SkipMachineOOBE>true</SkipMachineOOBE> <SkipUserOOBE>true</SkipUserOOBE> </OOBE> <RegisteredOrganization>Organization</RegisteredOrganization> <RegisteredOwner>Owner</RegisteredOwner> <DisableAutoDaylightTimeSet>false</DisableAutoDaylightTimeSet> <TimeZone>Eastern Standard Time</TimeZone> <UserAccounts> <AdministratorPassword> <Value>MyPassword</Value> 3 <PlainText>true</PlainText> </AdministratorPassword> </UserAccounts> </component> </settings> </unattend>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <windows_machine_set_name> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <windows_machine_set_name> 6 machine.openshift.io/os-id: Windows 7 spec: metadata: labels: node-role.kubernetes.io/worker: \"\" 8 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 128 9 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <windows_vm_template_name> 11 userDataSecret: name: windows-user-data 12 workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcePool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-windows-worker-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"apiVersion: node.k8s.io/v1beta1 kind: RuntimeClass metadata: name: <runtime_class_name> 1 handler: 'docker' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"Windows\"",
"oc create -f <file-name>.yaml",
"oc create -f runtime-class.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: <runtime_class_name> 1",
"apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer",
"apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: tolerations: - key: \"os\" value: \"Windows\" Effect: \"NoSchedule\" containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 imagePullPolicy: IfNotPresent command: - powershell.exe - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" nodeSelector: kubernetes.io/os: windows",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc adm cordon <node_name> oc adm drain <node_name>",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc get machines",
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api",
"oc delete --all pods --namespace=openshift-windows-machine-config-operator",
"oc get pods --namespace openshift-windows-machine-config-operator",
"oc delete namespace openshift-windows-machine-config-operator"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/windows_container_support_for_openshift/index |
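After the Windows machine set in the examples above scales up, a quick way to confirm that the Windows machines and nodes were created is to filter by the labels used in the machine set manifests; this is an illustrative sketch that assumes the default label values shown in those examples:

# Machines created from the Windows machine set carry the Windows os-id label
oc get machines -n openshift-machine-api -l machine.openshift.io/os-id=Windows

# the resulting nodes report the windows operating system label
oc get nodes -l kubernetes.io/os=windows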
Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster | Chapter 6. Scaling storage of Microsoft Azure OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on Microsoft Azure cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 6.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new object storage devices (OSDs) and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 6.2. Scaling out storage capacity on a Microsoft Azure cluster OpenShift Data Foundation is highly scalable. 
It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. Practically, there is no limit on the number of nodes that can be added, but from a support perspective 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: adding a new node and scaling up the storage capacity. Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 6.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 6.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_of_microsoft_azure_openshift_data_foundation_cluster |
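The node-labeling step in the procedure above is shown through the web console; the same label can also be applied from the command line. A short sketch, assuming a new worker node named <new_node_name>:

# apply the OpenShift Data Foundation label to the new node
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

# list the labeled storage nodes before adding capacity
oc get nodes -l cluster.ocs.openshift.io/openshift-storage= -o name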
13.3. Troubleshooting SR-IOV | 13.3. Troubleshooting SR-IOV This section contains solutions for problems which may affect SR-IOV. Error starting the guest When starting a configured virtual machine, an error occurs as follows: This error is often caused by a device that is already assigned to another guest or to the host itself. Error migrating, saving, or dumping the guest Attempts to migrate and dump the virtual machine cause an error similar to the following: Because device assignment uses hardware on the specific host where the virtual machine was started, guest migration and save are not supported when device assignment is in use. Currently, the same limitation also applies to core-dumping a guest; this may change in the future. | [
"virsh start test error: Failed to start domain test error: internal error unable to start guest: char device redirected to /dev/pts/2 get_real_device: /sys/bus/pci/devices/0000:03:10.0/config: Permission denied init_assigned_device: Error: Couldn't get real device (03:10.0)! Failed to initialize assigned device host=03:10.0",
"virsh dump --crash 5 /tmp/vmcore error: Failed to core dump domain 5 to /tmp/vmcore error: internal error unable to execute QEMU command 'migrate': An undefined error has occurred"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-sr_iov-troubleshooting |
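Because the "Couldn't get real device" failure above usually means the PCI device is already claimed, it can help to check which defined domain references the device. The following loop is an illustrative sketch rather than an official procedure: the PCI address (bus 0x03, slot 0x10, function 0x0) matches the example error and must be adjusted for your device, and the exact attribute layout in the domain XML can vary between libvirt versions.

# List every defined domain whose XML references PCI device 03:10.0
for dom in $(virsh list --all --name); do
    if virsh dumpxml "$dom" | grep -q "bus='0x03' slot='0x10' function='0x0'"; then
        echo "PCI device 03:10.0 is referenced by domain: $dom"
    fi
done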
Jenkins | Jenkins OpenShift Container Platform 4.14 Jenkins Red Hat OpenShift Documentation Team | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json",
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/jenkins/index |
5.5. Controlling LVM Device Scans with Filters | At startup, the vgscan command is run to scan the block devices on the system looking for LVM labels, to determine which of them are physical volumes and to read the metadata and build up a list of volume groups. The names of the physical volumes are stored in the LVM cache file of each node in the system, /etc/lvm/cache/.cache . Subsequent commands may read that file to avoid rescanning. You can control which devices LVM scans by setting up filters in the lvm.conf configuration file. The filters in the lvm.conf file consist of a series of simple regular expressions that get applied to the device names that are in the /dev directory to decide whether to accept or reject each block device found. The following examples show the use of filters to control which devices LVM scans. Note that some of these examples do not necessarily represent best practice, as the regular expressions are matched freely against the complete pathname. For example, a/loop/ is equivalent to a/.*loop.*/ and would match /dev/solooperation/lvol1 . The following filter adds all discovered devices, which is the default behavior as there is no filter configured in the configuration file: The following filter removes the cdrom device in order to avoid delays if the drive contains no media: The following filter adds all loop devices and removes all other block devices: The following filter adds all loop and IDE devices and removes all other block devices: The following filter adds just partition 8 on the first IDE drive and removes all other block devices: Note When the lvmetad daemon is running, the filter = setting in the /etc/lvm/lvm.conf file does not apply when you execute the pvscan --cache device command. To filter devices, you need to use the global_filter = setting. Devices that fail the global filter are not opened by LVM and are never scanned. You may need to use a global filter, for example, when you use LVM devices in VMs and you do not want the contents of the devices in the VMs to be scanned by the physical host. For more information on the lvm.conf file, see Appendix B, The LVM Configuration Files and the lvm.conf (5) man page. | [
"filter = [ \"a/.*/\" ]",
"filter = [ \"r|/dev/cdrom|\" ]",
"filter = [ \"a/loop.*/\", \"r/.*/\" ]",
"filter =[ \"a|loop.*|\", \"a|/dev/hd.*|\", \"r|.*|\" ]",
"filter = [ \"a|^/dev/hda8USD|\", \"r/.*/\" ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_filters |
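The note above mentions the global_filter = setting, but the examples only cover filter = . A hypothetical global filter, using illustrative device names, that stops the physical host from ever opening two guest-backed disks while still scanning everything else could look like the following line in lvm.conf:

global_filter = [ "r|^/dev/sdb$|", "r|^/dev/sdc$|", "a|.*|" ]

Because rejected devices are never opened at all, a global filter is the appropriate place for this kind of exclusion when lvmetad is in use.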
Security APIs | Security APIs OpenShift Container Platform 4.18 Reference guide for security APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/index |
1.5. Common Exploits and Attacks | 1.5. Common Exploits and Attacks Table 1.1, "Common Exploits" details some of the most common exploits and entry points used by intruders to access organizational network resources. Key to these common exploits are the explanations of how they are performed and how administrators can properly safeguard their network against such attacks. Table 1.1. Common Exploits Exploit Description Notes Null or Default Passwords Leaving administrative passwords blank or using a default password set by the product vendor. This is most common in hardware such as routers and firewalls, but some services that run on Linux can contain default administrator passwords as well (though Red Hat Enterprise Linux 7 does not ship with them). Commonly associated with networking hardware such as routers, firewalls, VPNs, and network attached storage (NAS) appliances. Common in many legacy operating systems, especially those that bundle services (such as UNIX and Windows.) Administrators sometimes create privileged user accounts in a rush and leave the password null, creating a perfect entry point for malicious users who discover the account. Default Shared Keys Secure services sometimes package default security keys for development or evaluation testing purposes. If these keys are left unchanged and are placed in a production environment on the Internet, all users with the same default keys have access to that shared-key resource, and any sensitive information that it contains. Most common in wireless access points and preconfigured secure server appliances. IP Spoofing A remote machine acts as a node on your local network, finds vulnerabilities with your servers, and installs a backdoor program or Trojan horse to gain control over your network resources. Spoofing is quite difficult as it involves the attacker predicting TCP/IP sequence numbers to coordinate a connection to target systems, but several tools are available to assist crackers in performing such a vulnerability. Depends on target system running services (such as rsh , telnet , FTP and others) that use source-based authentication techniques, which are not recommended when compared to PKI or other forms of encrypted authentication used in ssh or SSL/TLS. Eavesdropping Collecting data that passes between two active nodes on a network by eavesdropping on the connection between the two nodes. This type of attack works mostly with plain text transmission protocols such as Telnet, FTP, and HTTP transfers. Remote attacker must have access to a compromised system on a LAN in order to perform such an attack; usually the cracker has used an active attack (such as IP spoofing or man-in-the-middle) to compromise a system on the LAN. Preventative measures include services with cryptographic key exchange, one-time passwords, or encrypted authentication to prevent password snooping; strong encryption during transmission is also advised. Service Vulnerabilities An attacker finds a flaw or loophole in a service run over the Internet; through this vulnerability, the attacker compromises the entire system and any data that it may hold, and could possibly compromise other systems on the network. HTTP-based services such as CGI are vulnerable to remote command execution and even interactive shell access. 
Even if the HTTP service runs as a non-privileged user such as "nobody", information such as configuration files and network maps can be read, or the attacker can start a denial of service attack which drains system resources or renders it unavailable to other users. Services sometimes can have vulnerabilities that go unnoticed during development and testing; these vulnerabilities (such as buffer overflows , where attackers crash a service using arbitrary values that fill the memory buffer of an application, giving the attacker an interactive command prompt from which they may execute arbitrary commands) can give complete administrative control to an attacker. Administrators should make sure that services do not run as the root user, and should stay vigilant of patches and errata updates for applications from vendors or security organizations such as CERT and CVE. Application Vulnerabilities Attackers find faults in desktop and workstation applications (such as email clients) and execute arbitrary code, implant Trojan horses for future compromise, or crash systems. Further exploitation can occur if the compromised workstation has administrative privileges on the rest of the network. Workstations and desktops are more prone to exploitation as workers do not have the expertise or experience to prevent or detect a compromise; it is imperative to inform individuals of the risks they are taking when they install unauthorized software or open unsolicited email attachments. Safeguards can be implemented such that email client software does not automatically open or execute attachments. Additionally, the automatic update of workstation software using Red Hat Network; or other system management services can alleviate the burdens of multi-seat security deployments. Denial of Service (DoS) Attacks Attacker or group of attackers coordinate against an organization's network or server resources by sending unauthorized packets to the target host (either server, router, or workstation). This forces the resource to become unavailable to legitimate users. The most reported DoS case in the US occurred in 2000. Several highly-trafficked commercial and government sites were rendered unavailable by a coordinated ping flood attack using several compromised systems with high bandwidth connections acting as zombies , or redirected broadcast nodes. Source packets are usually forged (as well as rebroadcast), making investigation as to the true source of the attack difficult. Advances in ingress filtering (IETF rfc2267) using iptables and Network Intrusion Detection Systems such as snort assist administrators in tracking down and preventing distributed DoS attacks. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-common_exploits_and_attacks |
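As a rough illustration of the ingress filtering mentioned in the notes on spoofing and denial of service, rules along the following lines drop inbound packets whose claimed source address could not legitimately arrive from the Internet. The interface name eth0 and the choice of address ranges are assumptions for the example, not a complete anti-spoofing policy:

# Drop packets arriving on the external interface that claim a private or loopback source address
iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth0 -s 172.16.0.0/12 -j DROP
iptables -A INPUT -i eth0 -s 192.168.0.0/16 -j DROP
iptables -A INPUT -i eth0 -s 127.0.0.0/8 -j DROP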
10.2. Setting and Removing Cluster Properties | To set the value of a cluster property, use the following pcs command. For example, to set the value of symmetric-cluster to false , use the following command. You can remove a cluster property from the configuration with the following command. Alternatively, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false , the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true , which is its default value. | [
"pcs property set property = value",
"pcs property set symmetric-cluster=false",
"pcs property unset property",
"pcs property set symmetic-cluster="
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-setremoveclusterprops-HAAR |
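To confirm that a property change or removal took effect, it is usually enough to display the currently configured property values. The exact subcommand spelling can differ slightly between pcs versions, so treat this as a sketch:

# Show properties that have been explicitly set
pcs property list

# Show all properties, including defaults
pcs property list --all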
probe::signal.do_action.return | probe::signal.do_action.return Name probe::signal.do_action.return - Examining or changing a signal action completed Synopsis signal.do_action.return Values retstr Return value as a string name Name of the probe point | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-do-action-return |
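A minimal way to see this probe point in action is a one-line script passed to stap. The printf format is an illustrative choice, and the probe only fires on kernels and tapsets where the signal.do_action probe is available:

# Print the probe name and the return value string each time a signal action change completes
stap -e 'probe signal.do_action.return { printf("%s: retstr=%s\n", name, retstr) }'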
Chapter 6. Customizing the Learning Paths in Red Hat Developer Hub | Chapter 6. Customizing the Learning Paths in Red Hat Developer Hub In Red Hat Developer Hub, you can configure Learning Paths by passing the data into the app-config.yaml file as a proxy. The base URL must include the /developer-hub/learning-paths proxy. Note Due to the use of overlapping pathRewrites for both the learning-path and homepage quick access proxies, you must create the learning-paths configuration ( ^api/proxy/developer-hub/learning-paths ) before you create the homepage configuration ( ^/api/proxy/developer-hub ). For more information about customizing the Home page in Red Hat Developer Hub, see Customizing the Home page in Red Hat Developer Hub . You can provide data to the Learning Path from the following sources: JSON files hosted on GitHub or GitLab. A dedicated service that provides the Learning Path data in JSON format using an API. 6.1. Using hosted JSON files to provide data to the Learning Paths Prerequisites You have installed Red Hat Developer Hub by using either the Operator or Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To access the data from the JSON files, complete the following step: Add the following code to the app-config.yaml file: proxy: endpoints: '/developer-hub': target: https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/learning-paths': '/redhat-developer/rhdh/main/packages/app/public/learning-paths/data.json' '^/api/proxy/developer-hub/tech-radar': '/redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json' '^/api/proxy/developer-hub': '/redhat-developer/rhdh/main/packages/app/public/homepage/data.json' changeOrigin: true secure: true 6.2. Using a dedicated service to provide data to the Learning Paths When using a dedicated service, you can do the following: Use the same service to provide the data to all configurable Developer Hub pages or use a different service for each page. Use the red-hat-developer-hub-customization-provider as an example service, which provides data for both the Home and Tech Radar pages. The red-hat-developer-hub-customization-provider service provides the same data as default Developer Hub data. You can fork the red-hat-developer-hub-customization-provider service repository from GitHub and modify it with your own data, if required. Deploy the red-hat-developer-hub-customization-provider service and the Developer Hub Helm chart on the same cluster. Prerequisites You have installed the Red Hat Developer Hub using Helm chart. For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform . Procedure To use a dedicated service to provide the Learning Path data, complete the following steps: Add the following code to the app-config-rhdh.yaml file: proxy: endpoints: # Other Proxies '/developer-hub/learning-paths': target: USD{LEARNING_PATH_DATA_URL} changeOrigin: true # Change to "false" in case of using self hosted cluster with a self-signed certificate secure: true where the LEARNING_PATH_DATA_URL is defined as http://<SERVICE_NAME>/learning-paths , for example, http://rhdh-customization-provider/learning-paths . Note You can define the LEARNING_PATH_DATA_URL by adding it to rhdh-secrets or by directly replacing it with its value in your custom ConfigMap. Delete the Developer Hub pod to ensure that the new configurations are loaded correctly. | [
"proxy: endpoints: '/developer-hub': target: https://raw.githubusercontent.com/ pathRewrite: '^/api/proxy/developer-hub/learning-paths': '/redhat-developer/rhdh/main/packages/app/public/learning-paths/data.json' '^/api/proxy/developer-hub/tech-radar': '/redhat-developer/rhdh/main/packages/app/public/tech-radar/data-default.json' '^/api/proxy/developer-hub': '/redhat-developer/rhdh/main/packages/app/public/homepage/data.json' changeOrigin: true secure: true",
"proxy: endpoints: # Other Proxies '/developer-hub/learning-paths': target: USD{LEARNING_PATH_DATA_URL} changeOrigin: true # Change to \"false\" in case of using self hosted cluster with a self-signed certificate secure: true"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/proc-customize-rhdh-learning-paths_rhdh-getting-started |
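After the proxy is in place and the Developer Hub pod has restarted, a quick way to confirm that the Learning Path data is reachable is to query the proxy endpoint itself. The hostname below is a placeholder, jq is used only for readability, and depending on your authentication setup you may also need to pass a bearer token:

# Fetch the proxied Learning Path JSON through the Developer Hub route
curl -s "https://<developer_hub_route>/api/proxy/developer-hub/learning-paths" | jq .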
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_cloud/making-open-source-more-inclusive |
Chapter 3. Mirroring images for a disconnected installation | Chapter 3. Mirroring images for a disconnected installation You can ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the Mirroring Operator catalogs for use with disconnected clusters procedure to copy images to a device you can move across network boundaries with. 3.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.3.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.4. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.5. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. 
If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . 
This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" \ --insecure=true 1 1 Optional: If you do not want to configure trust for the target registry, add the --insecure=true flag. If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.6. The Cluster Samples Operator in a disconnected environment In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation. 3.6.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. 
Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 3.7. Mirroring Operator catalogs for use with disconnected clusters You can mirror the Operator contents of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2 . For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. Running oc adm catalog mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster. Additional resources Using Operator Lifecycle Manager on restricted networks 3.7.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters has the following prerequisites: Workstation with unrestricted network access. podman version 1.9.3 or later. If you want to filter, or prune , an existing catalog and selectively mirror only a subset of Operators, see the following sections: Installing the opm CLI Updating or filtering a file-based catalog image If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io : USD podman login registry.redhat.io Access to a mirror registry that supports Docker v2-2 . On your mirror registry, decide which repository, or namespace, to use for storing mirrored Operator content. For example, you might create an olm-mirror repository. 
If your mirror registry does not have internet access, connect removable media to your workstation with unrestricted network access. If you are working with private registries, including registry.redhat.io , set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json 3.7.2. Extracting and mirroring catalog contents The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry. Alternatively, if your mirror registry is on a completely disconnected, or airgapped , host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry. 3.7.2.1. Mirroring catalog contents to registries on the same network If your mirror registry is co-located on the same network as your workstation with unrestricted network access, take the following actions on your workstation. Procedure If your mirror registry requires authentication, run the following command to log in to the registry: USD podman login <mirror_registry> Run the following command to extract and mirror the content to the mirror registry: USD oc adm catalog mirror \ <index_image> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify the index image for the catalog that you want to mirror. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. If there is an existing repository found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io . 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 . 6 Optional: Generate only the manifests required for mirroring without actually mirroring the image content to a registry. This option can be useful for reviewing what will be mirrored, and lets you make any changes to the mapping list, if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog. Example output src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 ... 
wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2 1 Directory for the temporary index.db database generated by the command. 2 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Additional resources Architecture and operating system support for Operators 3.7.2.2. Mirroring catalog contents to airgapped registries If your mirror registry is on a completely disconnected, or airgapped, host, take the following actions. Procedure Run the following command on your workstation with unrestricted network access to mirror the content to local files: USD oc adm catalog mirror \ <index_image> \ 1 file:///local/index \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the index image for the catalog that you want to mirror. 2 Specify the content to mirror to local files in your current directory. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Example output ... info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2 1 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. 2 Record the expanded file:// path that is based on your provided index image. This path is referenced in a subsequent step. This command creates a v2/ directory in your current directory. Copy the v2/ directory to removable media. Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry. If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry: USD podman login <mirror_registry> Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry: USD oc adm catalog mirror \ file://local/index/<repository>/<index_image>:<tag> \ 1 <mirror_registry>:<port>[/<repository>] \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the file:// path from the command output. 2 Specify the fully qualified domain name (FQDN) for the target registry to mirror the Operator contents to. The mirror registry <repository> can be any existing repository, or namespace, on the registry, for example olm-mirror as outlined in the prerequisites. 
If an existing repository is found during mirroring, the repository name is added to the resulting image name. If you do not want the image name to include the repository name, omit the <repository> value from this line, for example <mirror_registry>:<port> . 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , linux/arm64 , and .* Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry target used in the previous step: USD oc adm catalog mirror \ <mirror_registry>:<port>/<index_image> \ <mirror_registry>:<port>[/<repository>] \ --manifests-only \ 1 [-a USD{REG_CREDS}] \ [--insecure] 1 The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again. Important This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the previous step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step. After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to enable installation of Operators from OperatorHub. Additional resources Architecture and operating system support for Operators 3.7.3. Generated manifests After mirroring Operator catalog content to your mirror registry, a manifests directory is generated in your current directory. If you mirrored content to a registry on the same network, the directory name takes the following pattern: manifests-<index_image_name>-<random_number> If you mirrored content to a registry on a disconnected host in the previous section, the directory name takes the following pattern: manifests-index/<repository>/<index_image_name>-<random_number> Note The manifests directory name is referenced in subsequent procedures. The manifests directory contains the following files, some of which might require further modification: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster; an illustrative sketch is provided after the command listing below. Important If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any slash ( / ) characters from the metadata.name field.
Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error. The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. Important If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to mirror, see the steps in the Mirroring a package manifest format catalog image procedure of the OpenShift Container Platform 4.7 documentation about modifying your mapping.txt file and using the file with the oc image mirror command. 3.7.4. Postinstallation requirements After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to populate and enable installation of Operators from OperatorHub. Additional resources Populating OperatorHub from mirrored Operator catalogs Updating or filtering a file-based catalog image 3.8. Next steps Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere , bare metal , or Amazon Web Services . 3.9. Additional resources See Gathering data about specific features for more information about using must-gather. | [
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=",
"\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },",
"{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\" --insecure=true 1",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-install",
"podman login registry.redhat.io",
"REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json",
"podman login <mirror_registry>",
"oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6",
"src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2",
"oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2",
"podman login <mirror_registry>",
"oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5",
"oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]",
"manifests-<index_image_name>-<random_number>",
"manifests-index/<repository>/<index_image_name>-<random_number>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/disconnected_installation_mirroring/installing-mirroring-installation-images |
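For the catalog mirroring procedure above, the following is an illustrative sketch only of what the generated catalogSource.yaml typically resembles and of how the generated manifests are applied after installation. The registry host name, repository, index image tag, and catalog name shown here are placeholders, not values produced by any particular mirroring run; always start from the files in the manifests directory generated by your own oc adm catalog mirror command.

# Illustrative CatalogSource sketch -- all values are placeholders
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operator-index          # example name; must not contain slash (/) characters
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.example.com:5000/olm-mirror/redhat-operator-index:v4.16   # placeholder mirrored index image
  displayName: Mirrored Red Hat Operators
  publisher: example.com

After the cluster installation has finished, the generated objects are created along these lines, where the manifests directory name follows the patterns described in the Generated manifests section:

# Create the ImageContentSourcePolicy, then the CatalogSource, from the generated manifests
oc create -f manifests-<index_image_name>-<random_number>/imageContentSourcePolicy.yaml
oc apply -f manifests-<index_image_name>-<random_number>/catalogSource.yaml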
37.4.2. Updating the R/W state of a multipath device | 37.4.2. Updating the R/W state of a multipath device If multipathing is enabled, after rescanning the logical unit, the change in its state must also be reflected in the logical unit's corresponding multipath drive. Do this by reloading the multipath device maps with the multipath -r command, shown in the command listing below. The multipath -ll command can then be used to confirm the change, as illustrated in the example that follows. | [
"multipath -r"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch37s04s02 |
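As a minimal illustration of the procedure above, the reload and verification might look like the following. The map name mpatha is hypothetical; use the name of the multipath device on your own system.

# Reload the multipath device maps so the updated R/W state is picked up
multipath -r
# List the map and confirm the updated state ('mpatha' is an example map name)
multipath -ll mpatha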
4.129. libcap | 4.129. libcap 4.129.1. RHSA-2011:1694 - Low: libcap security and bug fix update Updated libcap packages that fix one security issue and one bug are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The libcap packages provide a library and tools for getting and setting POSIX capabilities. Security Fix CVE-2011-4099 It was found that capsh did not change into the new root when using the "--chroot" option. An application started via the "capsh --chroot" command could use this flaw to escape the chroot restrictions. Bug Fix BZ# 730957 Previously, the libcap packages did not contain the capsh(1) manual page. With this update, the capsh(1) manual page is included. All libcap users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libcap |
Chapter 18. Red Hat Software Collections | Chapter 18. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set allows for optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can pick and choose at any time which package version they want to run. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . Red Hat Developer Toolset is now a part of Red Hat Software Collections, included as a separate Software Collection. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides the current versions of the GNU Compiler Collection, GNU Debugger, Eclipse development platform, and other development, debugging, and performance monitoring tools. See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-red_hat_software_collections |
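As a brief sketch of the scl mechanism described above, a Software Collection is typically enabled per command or per shell session. The collection name used in the second line is only an example; which collections are available depends on the Red Hat Software Collections release and on what is installed.

# Run a single command with a Software Collection enabled (generic form)
scl enable <collection> '<command>'
# For example, if the rh-python36 collection is installed:
scl enable rh-python36 'python --version'
# Or start a subshell in which the collection remains enabled
scl enable rh-python36 bash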
1.3. Cluster Infrastructure | 1.3. Cluster Infrastructure The Red Hat Cluster Suite cluster infrastructure provides the basic functions for a group of computers (called nodes or members ) to work together as a cluster. Once a cluster is formed using the cluster infrastructure, you can use other Red Hat Cluster Suite components to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover). The cluster infrastructure performs the following functions: Cluster management Lock management Fencing Cluster configuration management 1.3.1. Cluster Management Cluster management manages cluster quorum and cluster membership. One of the following Red Hat Cluster Suite components performs cluster management: CMAN (an abbreviation for cluster manager) or GULM (Grand Unified Lock Manager). CMAN operates as the cluster manager if a cluster is configured to use DLM (Distributed Lock Manager) as the lock manager. GULM operates as the cluster manager if a cluster is configured to use GULM as the lock manager. The major difference between the two cluster managers is that CMAN is a distributed cluster manager and GULM is a client-server cluster manager. CMAN runs in each cluster node; cluster management is distributed across all nodes in the cluster (refer to Figure 1.2, "CMAN/DLM Overview" ). GULM runs in nodes designated as GULM server nodes; cluster management is centralized in the nodes designated as GULM server nodes (refer to Figure 1.3, "GULM Overview" ). GULM server nodes manage the cluster through GULM clients in the cluster nodes. With GULM, cluster management operates in a limited number of nodes: either one, three, or five nodes configured as GULM servers. The cluster manager keeps track of cluster quorum by monitoring the count of cluster nodes that run cluster manager. (In a CMAN cluster, all cluster nodes run cluster manager; in a GULM cluster only the GULM servers run cluster manager.) If more than half the nodes that run cluster manager are active, the cluster has quorum. If half the nodes that run cluster manager (or fewer) are active, the cluster does not have quorum, and all cluster activity is stopped. Cluster quorum prevents the occurrence of a "split-brain" condition - a condition where two instances of the same cluster are running. A split-brain condition would allow each cluster instance to access cluster resources without knowledge of the other cluster instance, resulting in corrupted cluster integrity. In a CMAN cluster, quorum is determined by communication of heartbeats among cluster nodes via Ethernet. Optionally, quorum can be determined by a combination of communicating heartbeats via Ethernet and through a quorum disk. For quorum via Ethernet, quorum consists of 50 percent of the node votes plus 1. For quorum via quorum disk, quorum consists of user-specified conditions. Note In a CMAN cluster, by default each node has one quorum vote for establishing quorum. Optionally, you can configure each node to have more than one vote. In a GULM cluster, the quorum consists of a majority of nodes designated as GULM servers according to the number of GULM servers configured: Configured with one GULM server - Quorum equals one GULM server. Configured with three GULM servers - Quorum equals two GULM servers. Configured with five GULM servers - Quorum equals three GULM servers. The cluster manager keeps track of membership by monitoring heartbeat messages from other cluster nodes. 
When cluster membership changes, the cluster manager notifies the other infrastructure components, which then take appropriate action. For example, if node A joins a cluster and mounts a GFS file system that nodes B and C have already mounted, then an additional journal and lock management is required for node A to use that GFS file system. If a cluster node does not transmit a heartbeat message within a prescribed amount of time, the cluster manager removes the node from the cluster and communicates to other cluster infrastructure components that the node is not a member. Again, other cluster infrastructure components determine what actions to take upon notification that the node is no longer a cluster member. For example, Fencing would fence the node that is no longer a member. Figure 1.2. CMAN/DLM Overview Figure 1.3. GULM Overview | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-hasci-overview-CSO |
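To make the CMAN voting rule above concrete, the following is a small worked sketch. It assumes quorum is determined over Ethernet heartbeats only (no quorum disk) and that each node contributes the default single vote.

# "More than half" of the votes must be present for the cluster to be quorate
expected_votes=5                        # for example, a five-node CMAN cluster
quorum=$(( expected_votes / 2 + 1 ))    # integer division: 5 / 2 = 2, so quorum = 3
echo "cluster is quorate with at least $quorum of $expected_votes votes"

With five votes, the cluster in this sketch tolerates the loss of two nodes; losing a third drops the vote count to half or fewer, quorum is lost, and all cluster activity stops as described above.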
Chapter 12. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] | Chapter 12. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 12.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 12.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 12.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 12.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 12.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 12.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 12.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 12.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 12.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 12.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.7. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 12.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 12.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 12.9. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 12.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete a VolumeSnapshotContent Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.12. Body parameters Parameter Type Description body DeleteOptions schema Table 12.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 12.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 12.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.17. Body parameters Parameter Type Description body Patch schema Table 12.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 12.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 12.22. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent Table 12.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeSnapshotContent Table 12.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset Table 12.25. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 12.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.27. Body parameters Parameter Type Description body Patch schema Table 12.28. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 12.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.30. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 12.31. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1 |
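To make the required spec fields concrete, the following is a hedged sketch of a VolumeSnapshotContent object for a pre-existing (statically provisioned) snapshot. The driver name, snapshot handle, and VolumeSnapshot reference are placeholders rather than values from a real cluster.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: example-snapshot-content            # example name
spec:
  deletionPolicy: Retain                    # keep the physical snapshot when the bound VolumeSnapshot is deleted
  driver: example.csi.vendor.com            # placeholder CSI driver name; must match the driver that owns the snapshot
  source:
    snapshotHandle: snap-0123456789abcdef   # placeholder CSI "snapshot_id" of the pre-existing snapshot
  volumeSnapshotRef:
    name: example-snapshot                  # VolumeSnapshot that this content binds to
    namespace: example-namespace

For the bidirectional binding described in .spec.volumeSnapshotRef to become valid, the referenced VolumeSnapshot must in turn name this VolumeSnapshotContent object in its own source field, as noted above.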