title | content | commands | url
---|---|---|---|
Chapter 1. Adding secrets to GitHub Actions for secure integration with external tools
|
Chapter 1. Adding secrets to GitHub Actions for secure integration with external tools Prerequisites Before you configure GitHub Actions, ensure you have the following: Admin access to your GitHub repository and CI/CD settings. Container registry credentials for pulling container images from Quay.io, JFrog Artifactory, or Sonatype Nexus. Authentication details for specific GitHub Actions tasks: For ACS security tasks : ROX Central server endpoint ROX API token For SBOM and artifact signing tasks : Cosign signing key password Private key and public key Trustification URL Client ID and secret Supported CycloneDX version Note The credentials and other details are already Base64-encoded, so you do not need to encode them again. You can find these credentials in your private.env file, which you created during RHTAP installation. 1.1. Adding secrets to GitHub Actions using UI Procedure Log in to GitHub and navigate to your source repository. Go to the Settings tab. In the left navigation pane, select Secrets and variables , then select Actions . Enter the following details: Select New repository secret . In the Name field, enter MY_GITHUB_TOKEN . In the Secret field, enter the token associated with your GitHub account. Repeat steps 3-4 to add the required variables: Variable Description Provide image registry credentials for only one image registry. QUAY_IO_CREDS_USR Username for accessing Quay.io repository. QUAY_IO_CREDS_PSW Password for accessing Quay.io repository. ARTIFACTORY_IO_CREDS_USR Username for accessing JFrog Artifactory repository. ARTIFACTORY_IO_CREDS_PSW Password for accessing JFrog Artifactory repository. NEXUS_IO_CREDS_USR Username for accessing Sonatype Nexus repository. NEXUS_IO_CREDS_PSW Password for accessing Sonatype Nexus repository. Set these variables if GitHub Actions runners do not run on the same cluster as the RHTAP instance. REKOR_HOST URL of your Rekor server. TUF_MIRROR URL of your TUF service. GitOps configuration for GitHub GITOPS_AUTH_PASSWORD The token the system uses to update the GitOps repository for newly built images. GITOPS_AUTH_USERNAME (optional) The parameter required for Jenkins to work with GitHub. You also need to uncomment a line with this parameter in a Jenkinsfile: GITOPS_AUTH_USERNAME = credentials('GITOPS_AUTH_USERNAME'). By default, this line is commented out. Variable required for ACS tasks. ROX_CENTRAL_ENDPOINT Endpoint for the ROX Central server. ROX_API_TOKEN API token for accessing the ROX server. Variables required for SBOM tasks. COSIGN_SECRET_PASSWORD Password for Cosign signing key. COSIGN_SECRET_KEY Private key for Cosign. COSIGN_PUBLIC_KEY Public key for Cosign. TRUSTIFICATION_BOMBASTIC_API_URL URL for Trustification Bombastic API used in SBOM generation. TRUSTIFICATION_OIDC_ISSUER_URL OIDC issuer URL used for authentication when interacting with the Trustification Bombastic API. TRUSTIFICATION_OIDC_CLIENT_ID Client ID for authenticating to the Trustification Bombastic API using OIDC. TRUSTIFICATION_OIDC_CLIENT_SECRET Client secret used alongside the client ID to authenticate to the Trustification Bombastic API. TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION Specifies the CycloneDX SBOM version that is supported and generated by the system. Select Add secret . Rerun the last pipeline run to verify the secrets are applied correctly. Alternatively, switch to you application's source repository in GitHub, make a minor change, and commit it to trigger a new pipeline run. 1.2. 
Adding secrets to GitHub using CLI Procedure Create a project with two files in your preferred text editor, such as Visual Studio Code: env_vars.sh ghub-set-vars Update the env_vars.sh file with the following environment variables: # env_vars.sh # GitHub credentials export MY_GITHUB_TOKEN="your_github_token_here" export MY_GITHUB_USER="your_github_username_here" export GITOPS_AUTH_PASSWORD="your_OpenShift_GitOps_password_here" export GITOPS_AUTH_USERNAME="your_OpenShift_GitOps_username_here" # Provide the credentials for the image registry you use. # Quay.io credentials export QUAY_IO_CREDS_USR="your_quay_username_here" export QUAY_IO_CREDS_PSW="your_quay_password_here" # JFrog Artifactory credentials export ARTIFACTORY_IO_CREDS_USR="your_artifactory_username_here" export ARTIFACTORY_IO_CREDS_PSW="your_artifactory_password_here" # Sonatype Nexus credentials export NEXUS_IO_CREDS_USR="your_nexus_username_here" export NEXUS_IO_CREDS_PSW="your_nexus_password_here" # Set these variables if GitHub Actions runners do not run on the same cluster as the RHTAP instance. # Rekor and TUF routes export REKOR_HOST="your rekor server url here" export TUF_MIRROR="your tuf service url here" # Variables required for ACS tasks # ROX variables export ROX_CENTRAL_ENDPOINT="your_rox_central_endpoint_here" export ROX_API_TOKEN="your_rox_api_token_here" # Variables required for SBOM tasks. # Cosign secrets export COSIGN_SECRET_PASSWORD="your_cosign_secret_password_here" export COSIGN_SECRET_KEY="your_cosign_secret_key_here" export COSIGN_PUBLIC_KEY="your_cosign_public_key_here" # Trustification credentials export TRUSTIFICATION_BOMBASTIC_API_URL="your__BOMBASTIC_API_URL_here" export TRUSTIFICATION_OIDC_ISSUER_URL="your_OIDC_ISSUER_URL_here" export TRUSTIFICATION_OIDC_CLIENT_ID="your_OIDC_CLIENT_ID_here" export TRUSTIFICATION_OIDC_CLIENT_SECRET="your_OIDC_CLIENT_SECRET_here" export TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION="your_SUPPORTED_CYCLONEDX_VERSION_here" Update the ghub-set-vars file with the following information: #!/bin/bash SCRIPTDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)" if [ $# -ne 1 ]; then echo "Missing param, provide GitHub repo name" echo "Note: This script uses MY_GITHUB_TOKEN and MY_GITHUB_USER env vars" exit fi REPO=$1 HEADER="PRIVATE-TOKEN: $MY_GITHUB_TOKEN" URL=https://github.com/api/v4/projects # Look up the project ID so we can use it below PID=$(curl -s -L --header "$HEADER" "$URL/$MY_GITHUB_USER%2F$REPO" | jq ".id") function setVars() { NAME=$1 VALUE=$2 MASKED=${3:-true} echo "setting $NAME in https://github.com/$MY_GITHUB_USER/$REPO" # Delete first because if the secret already exists then its value # won't be changed by the POST below curl -s --request DELETE --header "$HEADER" "$URL/$PID/variables/$NAME" # Set the new key/value curl -s --request POST --header "$HEADER" "$URL/$PID/variables" \ --form "key=$NAME" --form "value=$VALUE" --form "masked=$MASKED" | jq } setVars ROX_CENTRAL_ENDPOINT $ROX_CENTRAL_ENDPOINT setVars ROX_API_TOKEN $ROX_API_TOKEN setVars GITOPS_AUTH_PASSWORD $GITOPS_AUTH_PASSWORD setVars GITOPS_AUTH_USERNAME $GITOPS_AUTH_USERNAME setVars QUAY_IO_CREDS_USR $QUAY_IO_CREDS_USR setVars QUAY_IO_CREDS_PSW $QUAY_IO_CREDS_PSW setVars COSIGN_SECRET_PASSWORD $COSIGN_SECRET_PASSWORD setVars COSIGN_SECRET_KEY $COSIGN_SECRET_KEY setVars COSIGN_PUBLIC_KEY $COSIGN_PUBLIC_KEY setVars TRUSTIFICATION_BOMBASTIC_API_URL "$TRUSTIFICATION_BOMBASTIC_API_URL" setVars TRUSTIFICATION_OIDC_ISSUER_URL "$TRUSTIFICATION_OIDC_ISSUER_URL" setVars TRUSTIFICATION_OIDC_CLIENT_ID "$TRUSTIFICATION_OIDC_CLIENT_ID" setVars TRUSTIFICATION_OIDC_CLIENT_SECRET "$TRUSTIFICATION_OIDC_CLIENT_SECRET" setVars TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION "$TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION" setVars ARTIFACTORY_IO_CREDS_USR $ARTIFACTORY_IO_CREDS_USR setVars ARTIFACTORY_IO_CREDS_PSW $ARTIFACTORY_IO_CREDS_PSW setVars NEXUS_IO_CREDS_USR $NEXUS_IO_CREDS_USR setVars NEXUS_IO_CREDS_PSW $NEXUS_IO_CREDS_PSW setVars REKOR_HOST $REKOR_HOST setVars TUF_MIRROR $TUF_MIRROR (Optional) Modify the ghub-set-vars file to disable variables that are not required. For example, to disable setVars ROX_API_TOKEN $ROX_API_TOKEN , add false to it. ROX_API_TOKEN $ROX_API_TOKEN false Load the environment variables into your current shell session: source env_vars.sh Make the ghub-set-vars script executable, and run it with your repository name to set the variables in your GitHub repository. chmod +x ghub-set-vars ./ghub-set-vars your_repository_name Rerun the last pipeline run to verify the secrets are applied correctly. Alternatively, switch to your application's source repository in GitHub, make a minor change, and commit it to trigger a new pipeline run.
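As an alternative to the script above, the same repository secrets can be set with the official GitHub CLI, which handles encryption and storage of secret values for you. The following commands are a minimal sketch rather than part of the documented procedure: they assume the gh CLI is installed and authenticated, that env_vars.sh from the previous step has been sourced, and that OWNER/REPO is a placeholder for your repository.

# Push a subset of the variables from env_vars.sh as GitHub Actions repository secrets.
# Adjust the list of names to match the table in Section 1.1; OWNER/REPO is a placeholder.
source env_vars.sh
for name in ROX_CENTRAL_ENDPOINT ROX_API_TOKEN GITOPS_AUTH_PASSWORD QUAY_IO_CREDS_USR QUAY_IO_CREDS_PSW COSIGN_SECRET_PASSWORD COSIGN_SECRET_KEY COSIGN_PUBLIC_KEY; do
  gh secret set "$name" --repo OWNER/REPO --body "${!name}"
done
# List the secret names that now exist in the repository; values are never displayed.
gh secret list --repo OWNER/REPO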
|
[
"env_vars.sh GitHub credentials export MY_GITHUB_TOKEN=\"your_github_token_here\" export MY_GITHUB_USER=\"your_github_username_here\" export GITOPS_AUTH_PASSWORD=\"your_OpenShift_GitOps_password_here\" export GITOPS_AUTH_USERNAME=\"your_OpenShift_GitOps_username_here\" // Provide the credentials for the image registry you use. Quay.io credentials export QUAY_IO_CREDS_USR=\"your_quay_username_here\" export QUAY_IO_CREDS_PSW=\"your_quay_password_here\" JFrog Artifactory credenditals export ARTIFACTORY_IO_CREDS_USR=\"your_artifactory_username_here\" export ARTIFACTORY_IO_CREDS_PSW=\"your_artifactory_password_here\" Sonatype Nexus credentials export NEXUS_IO_CREDS_USR=\"your_nexus_username_here\" export NEXUS_IO_CREDS_PSW=\"your_nexus_password_here\" Rekor and TUF routes export REKOR_HOST=\"your rekor server url here\" export TUF_MIRROR=\"your tuf service url here\" // Variables required for ACS tasks ROX variables export ROX_CENTRAL_ENDPOINT=\"your_rox_central_endpoint_here\" export ROX_API_TOKEN=\"your_rox_api_token_here\" // Set these variables if GitHub Actions runners do not run on the same cluster as the {ProductShortName} instance. export ROX_CENTRAL_ENDPOINT=\"your_rox_central_endpoint_here\" export ROX_API_TOKEN=\"your_rox_api_token_here\" // Variables required for SBOM tasks. Cosign secrets export COSIGN_SECRET_PASSWORD=\"your_cosign_secret_password_here\" export COSIGN_SECRET_KEY=\"your_cosign_secret_key_here\" export COSIGN_PUBLIC_KEY=\"your_cosign_public_key_here\" Trustification credentials export TRUSTIFICATION_BOMBASTIC_API_URL=\"your__BOMBASTIC_API_URL_here\" export TRUSTIFICATION_OIDC_ISSUER_URL=\"your_OIDC_ISSUER_URL_here\" export TRUSTIFICATION_OIDC_CLIENT_ID=\"your_OIDC_CLIENT_ID_here\" export TRUSTIFICATION_OIDC_CLIENT_SECRET=\"your_OIDC_CLIENT_SECRET_here\" export TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION=\"your_SUPPORTED_CYCLONEDX_VERSION_here\"",
"#!/bin/bash SCRIPTDIR=\"USD(cd \"USD(dirname \"USD{BASH_SOURCE[0]}\")\" > /dev/null 2>&1 && pwd)\" if [ USD# -ne 1 ]; then echo \"Missing param, provide gitlab repo name\" echo \"Note: This script uses MY_GITHUB_TOKEN and MY_GITHUB_USER env vars\" exit fi REPO=USD1 HEADER=\"PRIVATE-TOKEN: USDMY_GITHUB_TOKEN\" URL=https://github.com/api/v4/projects Look up the project ID so we can use it below PID=USD(curl -s -L --header \"USDHEADER\" \"USDURL/USDMY_GITHUB_USER%2FUSDREPO\" | jq \".id\") function setVars() { NAME=USD1 VALUE=USD2 MASKED=USD{3:-true} echo \"setting USDNAME in https://github.com/USDMY_GITHUB_USER/USDREPO\" # Delete first because if the secret already exists then its value # won't be changed by the POST below curl -s --request DELETE --header \"USDHEADER\" \"USDURL/USDPID/variables/USDNAME\" # Set the new key/value curl -s --request POST --header \"USDHEADER\" \"USDURL/USDPID/variables\" --form \"key=USDNAME\" --form \"value=USDVALUE\" --form \"masked=USDMASKED\" | jq } setVars ROX_CENTRAL_ENDPOINT USDROX_CENTRAL_ENDPOINT setVars ROX_API_TOKEN USDROX_API_TOKEN setVars GITOPS_AUTH_PASSWORD USDMY_GITLAB_TOKEN setVars GITOPS_AUTH_USERNAME USDMY_GITLAB_USER setVars QUAY_IO_CREDS_USR USDQUAY_IO_CREDS_USR setVars QUAY_IO_CREDS_PSW USDQUAY_IO_CREDS_PSW setVars COSIGN_SECRET_PASSWORD USDCOSIGN_SECRET_PASSWORD setVars COSIGN_SECRET_KEY USDCOSIGN_SECRET_KEY setVars COSIGN_PUBLIC_KEY USDCOSIGN_PUBLIC_KEY setVars TRUSTIFICATION_BOMBASTIC_API_URL \"USDTRUSTIFICATION_BOMBASTIC_API_URL\" setVars TRUSTIFICATION_OIDC_ISSUER_URL \"USDTRUSTIFICATION_OIDC_ISSUER_URL\" setVars TRUSTIFICATION_OIDC_CLIENT_ID \"USDTRUSTIFICATION_OIDC_CLIENT_ID\" setVars TRUSTIFICATION_OIDC_CLIENT_SECRET \"USDTRUSTIFICATION_OIDC_CLIENT_SECRET\" setVars TRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION \"USDTRUSTIFICATION_SUPPORTED_CYCLONEDX_VERSION\" setVars ARTIFACTORY_IO_CREDS_USR USDARTIFACTORY_IO_CREDS_USR setVars ARTIFACTORY_IO_CREDS_PSW USDARTIFACTORY_IO_CREDS_PSW setVars NEXUS_IO_CREDS_USR USDNEXUS_IO_CREDS_USR setVars NEXUS_IO_CREDS_PSW USDNEXUS_IO_CREDS_PSW setVars REKOR_HOST USDREKOR_HOST setVars TUF_MIRROR USDTUF_MIRROR",
"ROX_API_TOKEN USDROX_API_TOKEN false",
"source env_vars.sh",
"chmod +x ghub-set-vars ./ghub-set-vars your_repository_name"
] |
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/configuring_github_actions/adding-secrets-to-github-actions-for-secure-integration-with-external-tools_github-actions
|
Chapter 43. Installation and Booting
|
Chapter 43. Installation and Booting Custom system image creation with Image Builder available as a Technology Preview The Image Builder tool enables users to create customized RHEL images. Starting with Red Hat Enterprise Linux 7.6, Image Builder is available in the Extras channel as a Technology Preview in the lorax-composer package. With Image Builder, users can create custom system images which include additional packages. Composer functionality can be accessed through a graphical user interface in Web Console, or with a command line interface in the composer-cli tool. Image Builder output formats include, among others: ISO disk image qcow2 file for direct use with a virtual machine file system image file To learn more about Image Builder, see the Image Builder Guide . (BZ#1613966)
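On the command line, this workflow is typically driven by the composer-cli tool. The commands below are a rough sketch of that flow under the assumption that the lorax-composer service is installed and running and that a blueprint file named example.toml already exists; exact option names can differ in the Technology Preview.

# List known blueprints, then push a new blueprint definition (example.toml is a placeholder).
composer-cli blueprints list
composer-cli blueprints push example.toml
# Start a qcow2 compose from that blueprint (assumes the blueprint inside example.toml is named "example").
composer-cli compose start example qcow2
composer-cli compose status
# Once the compose finishes, download the resulting image (replace the UUID placeholder).
composer-cli compose image <compose-uuid>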
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_installation_and_booting
|
Chapter 2. New features
|
Chapter 2. New features This section describes new features and major enhancements introduced in Red Hat Satellite 6.16. 2.1. Web UI Compliance remediation wizard Previously, you had to remediate OpenSCAP compliance failures by manually creating a remote execution job to apply remediation scripts or snippets. With this update, Satellite web UI provides a compliance remediation wizard that you can use to remediate OpenSCAP compliance failures. For more information, see Remediating compliance failures in Managing Security Compliance . Jira:SAT-23240 [1] Manifest expiration warnings and extension of expiration date Users are now notified in the web UI before their subscription manifest expires. The number of days of notice is determined by the expire_soon_days setting. Refreshing a subscription manifest now extends the expiration date to one year from the current date. Refresh your manifest at least once a year so it will never expire. The subscription manifest expiration date is displayed on the Manage Manifest page under Content > Subscriptions . Jira:SAT-11630 [1] 2.2. Installation and upgrade satellite-maintain update command for minor releases The satellite-maintain update command replaces satellite-maintain upgrade with --target-version for updating minor (z-stream) versions. As the upgrade command is now dedicated to major upgrades, the --target-version parameter has been removed. Jira:SAT-21970 Puppet Server updated to version 8 Puppet Server 8 is now included in Satellite. Existing clients with Puppet agent 7 will continue to work against Puppet Server 8. Jira:SAT-24140 [1] Upgrading to Satellite 6.16 also upgrades to PostgreSQL 13 When you upgrade your Satellite Server 6.15 to version 6.16, the PostgreSQL database on the system is upgraded from version 12 to version 13. During the upgrade, a backup of the PostgreSQL data is created in the /var/lib/pgsql/data-old/ directory. You can safely remove this directory after the upgrade completes. To create the backup, you must ensure enough disk space is available in /var/lib/pgsql/ . The additional space required for the backup equals the amount of space currently consumed by PostgreSQL 12. After you run satellite-maintain to start the upgrade, the utility performs a check to verify the available disk space. Jira:SAT-23369 [1] SCRAM hashing for PostgreSQL passwords PostgreSQL 13 uses SCRAM hashing for passwords. The installer updates existing user passwords to SCRAM hashing. You can view the existing users and their password hashes by running the following command: Jira:SAT-24414 [1] 2.3. Content management Content repair command for Capsule To repair all content on Capsule, run the following command: Jira:SAT-16330 [1] Publishing content views during repository synchronization is blocked to prevent incorrect metadata An error message is displayed if you try to publish a content view while a child repository is performing one of the following actions: Sync Upload content Remove content Republish metadata Similarly, you cannot initiate the above tasks on a repository while a parent content view is being published. Without this error message, publishing a content view while synchronizing a repository can cause incorrect metadata. Jira:SAT-20281 [1] Containers can now be pushed to Satellite's container registry Each pushed container repository path must include the organization, product, and repository name. Example: podman push <image> satellite.example.com/organization/product/repository . 
Jira:SAT-20280 [1] Command for container label migration The container image API now shows manifest labels, annotations, and if the manifest represents bootable or flatpak content. Satellite performs a pre-migration in the background after the upgrade to make this data available. Jira:SAT-23852 [1] 2.4. Host provisioning and management Provisioning templates for reconfiguring a self-signed CA certificate on hosts Satellite now provides public provisioning templates. You can use the templates to refresh your self-signed CA certificate on hosts when you renew the CA certificate on Satellite Server. You can use the following public provisioning templates: foreman_ca_refresh This template renders a shell script. You can use this template to execute the script on hosts, for example by using remote execution, to configure the CA certificate on hosts automatically. foreman_raw_ca This template renders raw content of the CA certificate. You can use this template to download the CA certificate and configure it on your hosts manually. For more information, see Refreshing the self-signed CA certificate on hosts in Managing hosts . Jira:SAT-18615 Job templates for running remote scripts on hosts Satellite now provides job templates that you can use to download a script from a URL and execute the script on a host. You can use one of the following REX templates to run a script from an URL: Download and run a script in the Commands job category for the Script remote execution provider. Download and execute a script in the Ansible Commands job category for the Ansible remote execution provider. Jira:SAT-18615 Root passwords are hashed by using SHA512 Satellite now uses the SHA512 algorithm to hash the root passwords of operating systems by default. The new default is only applied to new operating system entries. If you want to use the SHA512 algorithm in your existing operating systems, you have to change the algorithm manually and reprovision your hosts. Jira:SAT-26071 Improved RHEL 9 network configuration in Kickstart provisioning templates Previously, Satellite created ifcfg files in the Finish template to configure host network interfaces. In RHEL 9, the ifcfg files have been replaced with key files. For more information, see RHEL 9 networking: Say goodbye to ifcfg-files, and hello to keyfiles . With this release, the Kickstart provisioning templates rely on Anaconda to configure network interfaces, which makes the configuration process more robust. Additionally, Anaconda is now aware of the proper interface configuration and it can safely use those interfaces for the installation process. This improvement also fixes SAT-22579 . Jira:SAT-23034 [1] rhsm command registers RHEL 9 hosts to Satellite and enables Insights Previously, you registered RHEL hosts to Satellite in the redhat_register snippet and enabled Insights in the insights snippet. With this release, you can use the kickstart_rhsm snippet to register RHEL 9 hosts to Satellite and, optionally, enable Insights.. This snippet uses the rhsm command, which is part of Anaconda Kickstart native syntax. As a result, the number of required transactions is reduced to make the host configuration more robust. The workflow does not change for you. The new snippet accepts the same host parameters. Jira:SAT-23053 timesource configures NTP server when provisioning RHEL 9 hosts Previously, the Kickstart default provisioning template used the single timezone Kickstart command to configure both the time zone and NTP server. 
With this release, the NTP configuration is split into two Kickstart commands, timezone and timesource , to incorporate the new RHEL 9 Kickstart syntax. Jira:SAT-23053 Updated syntax for Anaconda options when provisioning RHEL 8 hosts Previously, the kickstart_kernel_options provisioning snippet used deprecated legacy syntax for Anaconda options when provisioning RHEL 8 hosts. With this release, the snippet uses the current syntax for Anaconda options. As a result, provisioning RHEL 8 hosts does not produce that warning. Jira:SAT-23053 use-ntp installs chrony when provisioning RHEL 7 hosts Previously, the use-ntp parameter installed the ntpdate package to configure an NTP client on RHEL 7 hosts. With this release, the Kickstart default provisioning template and ntp snippet install the chrony suite on RHEL 7 hosts. As a result, time synchronization is more accurate and robust. Jira:SAT-23053 [1] Improved customization of host registration The Global Registration template can now include user-defined snippets before_registration and after_registration . You can create these snippets to add custom commands to registration without editing the original template. For more information, see Foreman feature #38189 . Jira:SAT-23536 VMware vCenter Server 8 support You can now provision virtual machines by using a VMware compute resource with vCenter Server 8. Jira:SAT-21075 [1] Improved error message for missing VMware datastore Previously, when you attempted to provision a host on a VMware datastore cluster by using the API, it might fail with an ambiguous InvalidDatastorePath error. With this release, the API produces a specific ArgumentError with a descriptive message when the datastore is missing. As a result, you can easily debug the problem. Jira:SAT-23052 Provisioning supports NVMe Previously, you could only provision VMware machines with SCSI controllers. With this release, you can provision VMware machines with non-volatile memory express (NVMe) storage options. As a result, your virtual machines can access data faster and you have more flexibility for storage solutions. Jira:SAT-23052 SCSI storage connection for VMware ESXi Quick Boot enabled by default Previously, when performing a VMware ESXi Quick Boot with GRUB2 chainloading, you had to enable the connectefi scsi command in the pxegrub2_chainload snippet and the provisioning templates in which it is included. With this release, the command is enabled by default and you can disable it with the grub2-connectefi host parameter. As a result, you do not have to edit the provisioning templates to enable the feature. For more information, see the snippet. This improvement also fixes SAT-19018 . Jira:SAT-23052 [1] 2.5. Users and roles Active Directory login with user name only Active Directory (AD) users can now log in to the web UI or use the kinit utility by entering only a user name without specifying a domain. You can set a default AD domain name by using the foreman-ipa-sssd-default-realm option in the satellite-installer utility. Jira:SAT-18360 2.6. 
Hammer CLI tool New Hammer subcommands and options The following Hammer command has been added: hammer preupgrade-report The following Hammer subcommands have been added: hammer capsule content verify-checksum hammer content-view version verify-checksum hammer product verify-checksum hammer proxy content verify-checksum hammer repository verify-checksum The following Hammer options have been added: --content-view-environment-ids and --content-view-environments added to the hammer host create command --content-view-environment-ids and --content-view-environments added to the hammer host update command --include-latest-upgradable and --status added to the hammer host deb-package list command --include-latest-upgradable and --status added to the hammer host deb-package index command --limit-to-env added to the hammer host subscription content-override command --repo-data added to the hammer host-registration generate-command command --succeeded-only added to the hammer job-invocation rerun command --async added to the hammer product update-proxy command --exclude-refs and --include-refs added to the hammer repository create command --exclude-refs and --include-refs added to the hammer repository update command For more information, see Using the Hammer CLI tool or enter the commands with the --help option. Jira:SAT-28136 [1] 2.7. REST API New API endpoints The following API endpoints have been added: /katello/api/capsules/:id/content/verify_checksum /katello/api/content_view_versions/:id/verify_checksum /api/host_packages/:id /api/host_packages/compare /api/host_packages/installed_packages /api/hosts/:host_id/subscriptions/remove_subscriptions /api/hosts/bulk/build /api/hosts/bulk/reassign_hostgroups /katello/api/packages/thindex /api/permissions/current_permissions For more information, see the full API reference on your Satellite Server at https:// satellite.example.com /apidoc/v2.html . Jira:SAT-28134 [1]
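To illustrate how a few of the new Hammer subcommands and options from Section 2.6 might be combined, the following sketch is provided. It is not taken from the release notes: the IDs, names, and URL are placeholders, and the assumption that --include-refs and --exclude-refs apply to OSTree repositories should be verified against your environment.

# Verify checksums of all content on a Capsule and on a content view version.
hammer capsule content verify-checksum --id 1
hammer content-view version verify-checksum --id 5
# Create a repository while filtering refs with the new --include-refs/--exclude-refs options.
hammer repository create --organization "My_Organization" --product "My_Product" --name "My_Repository" --content-type ostree --url https://example.com/ostree/repo --include-refs "stable/*" --exclude-refs "testing/*"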
|
[
"SELECT rolname,rolpassword FROM pg_authid WHERE rolpassword != '';",
"hammer capsule content verify-checksum --id My_Capsule_ID"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/release_notes/new-features
|
Chapter 61. KafkaExporterSpec schema reference
|
Chapter 61. KafkaExporterSpec schema reference Used in: KafkaSpec Property Description image The docker image for the pods. string groupRegex Regular expression to specify which consumer groups to collect. Default value is .* . string topicRegex Regular expression to specify which topics to collect. Default value is .* . string groupExcludeRegex Regular expression to specify which consumer groups to exclude. string topicExcludeRegex Regular expression to specify which topics to exclude. string resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements logging Only log messages with the given severity or above. Valid levels: [ info , debug , trace ]. Default log level is info . string enableSaramaLogging Enable Sarama logging, a Go client library used by the Kafka Exporter. boolean template Customization of deployment templates and pods. KafkaExporterTemplate livenessProbe Pod liveness check. Probe readinessProbe Pod readiness check. Probe
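As a rough sketch of how these properties are used, the following command enables the Kafka Exporter on an existing Kafka resource by merging a kafkaExporter section into its spec. This is an illustration only: the cluster name, namespace, and property values are placeholders, and the spec.kafkaExporter field path is an assumption to confirm against your Streams version.

# Enable Kafka Exporter on an existing Kafka resource named my-cluster (placeholders throughout).
kubectl patch kafka my-cluster -n kafka --type merge -p '
spec:
  kafkaExporter:
    groupRegex: ".*"
    topicRegex: ".*"
    logging: debug
    enableSaramaLogging: true
    resources:
      requests: {cpu: 200m, memory: 64Mi}
      limits: {cpu: 500m, memory: 128Mi}
'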
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaExporterSpec-reference
|
Chapter 1. Preparing to deploy OpenShift Data Foundation
|
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS) then follow the steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preparing_to_deploy_openshift_data_foundation
|
Chapter 8. Operator SDK
|
Chapter 8. Operator SDK 8.1. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. See Developing Operators for full documentation on the Operator SDK. Note OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. 8.1.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site . From the latest 4.14 directory, download the latest version of the tarball for Linux. Unpack the archive: $ tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz Make the file executable: $ chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH . Tip To check your PATH : $ echo $PATH $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: $ operator-sdk version Example output operator-sdk version: "v1.31.0-ocp", ... 8.1.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and OpenShift mirror site for the arm64 architecture respectively. From the latest 4.14 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for amd64 architecture by running the following command: $ tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for arm64 architecture by running the following command: $ tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: $ chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: $ echo $PATH $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: $ operator-sdk version Example output operator-sdk version: "v1.31.0-ocp", ... 8.2. Operator SDK CLI reference The Operator SDK command-line interface (CLI) is a development kit designed to make writing Operators easier. Operator SDK CLI syntax $ operator-sdk <command> [<subcommand>] [<argument>] [<flags>] See Developing Operators for full documentation on the Operator SDK. 8.2.1. bundle The operator-sdk bundle command manages Operator bundle metadata. 8.2.1.1. validate The bundle validate subcommand validates an Operator bundle. Table 8.1. bundle validate flags Flag Description -h , --help Help output for the bundle validate subcommand. --index-builder (string) Tool to pull and unpack bundle images.
Only used when validating a bundle image. Available options are docker , which is the default, podman , or none . --list-optional List all optional validators available. When set, no validators are run. --select-optional (string) Label selector to select optional validators to run. When run with the --list-optional flag, lists available optional validators. 8.2.2. cleanup The operator-sdk cleanup command destroys and removes resources that were created for an Operator that was deployed with the run command. Table 8.2. cleanup flags Flag Description -h , --help Help output for the run bundle subcommand. --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --timeout <duration> Time to wait for the command to complete before failing. The default value is 2m0s . 8.2.3. completion The operator-sdk completion command generates shell completions to make issuing CLI commands quicker and easier. Table 8.3. completion subcommands Subcommand Description bash Generate bash completions. zsh Generate zsh completions. Table 8.4. completion flags Flag Description -h, --help Usage help output. For example: USD operator-sdk completion bash Example output # bash completion for operator-sdk -*- shell-script -*- ... # ex: ts=4 sw=4 et filetype=sh 8.2.4. create The operator-sdk create command is used to create, or scaffold , a Kubernetes API. 8.2.4.1. api The create api subcommand scaffolds a Kubernetes API. The subcommand must be run in a project that was initialized with the init command. Table 8.5. create api flags Flag Description -h , --help Help output for the run bundle subcommand. 8.2.5. generate The operator-sdk generate command invokes a specific generator to generate code or manifests. 8.2.5.1. bundle The generate bundle subcommand generates a set of bundle manifests, metadata, and a bundle.Dockerfile file for your Operator project. Note Typically, you run the generate kustomize manifests subcommand first to generate the input Kustomize bases that are used by the generate bundle subcommand. However, you can use the make bundle command in an initialized project to automate running these commands in sequence. Table 8.6. generate bundle flags Flag Description --channels (string) Comma-separated list of channels to which the bundle belongs. The default value is alpha . --crds-dir (string) Root directory for CustomResoureDefinition manifests. --default-channel (string) The default channel for the bundle. --deploy-dir (string) Root directory for Operator manifests, such as deployments and RBAC. This directory is different from the directory passed to the --input-dir flag. -h , --help Help for generate bundle --input-dir (string) Directory from which to read an existing bundle. This directory is the parent of your bundle manifests directory and is different from the --deploy-dir directory. --kustomize-dir (string) Directory containing Kustomize bases and a kustomization.yaml file for bundle manifests. The default path is config/manifests . --manifests Generate bundle manifests. --metadata Generate bundle metadata and Dockerfile. --output-dir (string) Directory to write the bundle to. --overwrite Overwrite the bundle metadata and Dockerfile if they exist. The default value is true . --package (string) Package name for the bundle. -q , --quiet Run in quiet mode. --stdout Write bundle manifest to standard out. --version (string) Semantic version of the Operator in the generated bundle. 
Set only when creating a new bundle or upgrading the Operator. Additional resources See Bundling an Operator and deploying with Operator Lifecycle Manager for a full procedure that includes using the make bundle command to call the generate bundle subcommand. 8.2.5.2. kustomize The generate kustomize subcommand contains subcommands that generate Kustomize data for the Operator. 8.2.5.2.1. manifests The generate kustomize manifests subcommand generates or regenerates Kustomize bases and a kustomization.yaml file in the config/manifests directory, which are used to build bundle manifests by other Operator SDK commands. This command interactively asks for UI metadata, an important component of manifest bases, by default unless a base already exists or you set the --interactive=false flag. Table 8.7. generate kustomize manifests flags Flag Description --apis-dir (string) Root directory for API type definitions. -h , --help Help for generate kustomize manifests . --input-dir (string) Directory containing existing Kustomize files. --interactive When set to false , if no Kustomize base exists, an interactive command prompt is presented to accept custom metadata. --output-dir (string) Directory where to write Kustomize files. --package (string) Package name. -q , --quiet Run in quiet mode. 8.2.6. init The operator-sdk init command initializes an Operator project and generates, or scaffolds , a default project directory layout for the given plugin. This command writes the following files: Boilerplate license file PROJECT file with the domain and repository Makefile to build the project go.mod file with project dependencies kustomization.yaml file for customizing manifests Patch file for customizing images for manager manifests Patch file for enabling Prometheus metrics main.go file to run Table 8.8. init flags Flag Description --help, -h Help output for the init command. --plugins (string) Name and optionally version of the plugin to initialize the project with. Available plugins are ansible.sdk.operatorframework.io/v1 , go.kubebuilder.io/v2 , go.kubebuilder.io/v3 , and helm.sdk.operatorframework.io/v1 . --project-version Project version. Available values are 2 and 3-alpha , which is the default. 8.2.7. run The operator-sdk run command provides options that can launch the Operator in various environments. 8.2.7.1. bundle The run bundle subcommand deploys an Operator in the bundle format with Operator Lifecycle Manager (OLM). Table 8.9. run bundle flags Flag Description --index-image (string) Index image in which to inject a bundle. The default image is quay.io/operator-framework/upstream-opm-builder:latest . --install-mode <install_mode_value> Install mode supported by the cluster service version (CSV) of the Operator, for example AllNamespaces or SingleNamespace . --timeout <duration> Install timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". 
For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Operator group membership for details on possible install modes. 8.2.7.2. bundle-upgrade The run bundle-upgrade subcommand upgrades an Operator that was previously installed in the bundle format with Operator Lifecycle Manager (OLM). Table 8.10. run bundle-upgrade flags Flag Description --timeout <duration> Upgrade timeout. The default value is 2m0s . --kubeconfig (string) Path to the kubeconfig file to use for CLI requests. -n , --namespace (string) If present, namespace in which to run the CLI request. --security-context-config <security_context> Specifies the security context to use for the catalog pod. Allowed values include restricted and legacy . The default value is legacy . [1] -h , --help Help output for the run bundle subcommand. The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". 8.2.8. scorecard The operator-sdk scorecard command runs the scorecard tool to validate an Operator bundle and provide suggestions for improvements. The command takes one argument, either a bundle image or directory containing manifests and metadata. If the argument holds an image tag, the image must be present remotely. Table 8.11. scorecard flags Flag Description -c , --config (string) Path to scorecard configuration file. The default path is bundle/tests/scorecard/config.yaml . -h , --help Help output for the scorecard command. --kubeconfig (string) Path to kubeconfig file. -L , --list List which tests are available to run. -n , --namespace (string) Namespace in which to run the test images. -o , --output (string) Output format for results. Available values are text , which is the default, and json . --pod-security <security_context> Option to run scorecard with the specified security context. Allowed values include restricted and legacy . The default value is legacy . [1] -l , --selector (string) Label selector to determine which tests are run. -s , --service-account (string) Service account to use for tests. The default value is default . -x , --skip-cleanup Disable resource cleanup after tests are run. -w , --wait-time <duration> Seconds to wait for tests to complete, for example 35s . The default value is 30s . The restricted security context is not compatible with the default namespace. To configure your Operator's pod security admission in your production environment, see "Complying with pod security admission". For more information about pod security admission, see "Understanding and managing pod security admission". Additional resources See Validating Operators using the scorecard tool for details about running the scorecard tool.
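As a quick end-to-end sketch of how these subcommands fit together, the commands below scaffold, bundle, validate, and run a hypothetical Go-based Operator. The domain, group, kind, and image names are placeholders, and the exact flags can vary between Operator SDK releases, so treat this as an outline rather than a verbatim procedure.

# Scaffold a new Go-based Operator project and an API for a hypothetical Memcached kind.
operator-sdk init --plugins go.kubebuilder.io/v3 --domain example.com --repo github.com/example/memcached-operator
operator-sdk create api --group cache --version v1alpha1 --kind Memcached --resource --controller
# Generate bundle manifests and metadata, then validate the bundle.
make bundle IMG=quay.io/example/memcached-operator:v0.0.1
operator-sdk bundle validate ./bundle
# Deploy the bundle with OLM and assess it with the scorecard tool.
operator-sdk run bundle quay.io/example/memcached-operator-bundle:v0.0.1 -n operators
operator-sdk scorecard ./bundle -n operators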
|
[
"tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/operator-sdk
|
Preface
|
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Packages for Eclipse Temurin are made available on Microsoft Windows and on multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.5/pr01
|
Chapter 4. Cluster Creation and Administration
|
Chapter 4. Cluster Creation and Administration This chapter describes how to perform basic cluster administration with Pacemaker, including creating the cluster, managing the cluster components, and displaying cluster status. 4.1. Cluster Creation To create a running cluster, perform the following steps: Start the pcsd daemon on each node in the cluster. Authenticate the nodes that will constitute the cluster. Configure and sync the cluster nodes. Start cluster services on the cluster nodes. The following sections describe the commands that you use to perform these steps. 4.1.1. Starting the pcsd daemon The following commands start the pcsd service and enable pcsd at system start. These commands should be run on each node in the cluster. 4.1.2. Authenticating the Cluster Nodes The following command authenticates pcs to the pcs daemon on the nodes in the cluster. The user name for the pcs administrator must be hacluster on every node. It is recommended that the password for user hacluster be the same on each node. If you do not specify username or password , the system will prompt you for those parameters for each node when you execute the command. If you do not specify any nodes, this command will authenticate pcs on the nodes that are specified with a pcs cluster setup command, if you have previously executed that command. For example, the following command authenticates user hacluster on z1.example.com for both of the nodes in the cluster that consists of z1.example.com and z2.example.com . This command prompts for the password for user hacluster on the cluster nodes. Authorization tokens are stored in the file ~/.pcs/tokens (or /var/lib/pcsd/tokens ). 4.1.3. Configuring and Starting the Cluster Nodes The following command configures the cluster configuration file and syncs the configuration to the specified nodes. If you specify the --start option, the command will also start the cluster services on the specified nodes. If necessary, you can also start the cluster services with a separate pcs cluster start command. When you create a cluster with the pcs cluster setup --start command or when you start cluster services with the pcs cluster start command, there may be a slight delay before the cluster is up and running. Before performing any subsequent actions on the cluster and its configuration, it is recommended that you use the pcs cluster status command to be sure that the cluster is up and running. If you specify the --local option, the command will perform changes on the local node only. The following command starts cluster services on the specified node or nodes. If you specify the --all option, the command starts cluster services on all nodes. If you do not specify any nodes, cluster services are started on the local node only.
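Putting these commands together, a minimal two-node example might look like the following sketch; the node names are the same placeholders used above and my_cluster is an arbitrary cluster name.

# On each node: start the pcs daemon and enable it at boot.
systemctl start pcsd.service
systemctl enable pcsd.service
# From one node: authenticate the nodes, then create and start the cluster.
pcs cluster auth z1.example.com z2.example.com -u hacluster
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
# Confirm the cluster is up and running before making further changes.
pcs cluster status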
|
[
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth [ node ] [...] [-u username ] [-p password ]",
"root@z1 ~]# pcs cluster auth z1.example.com z2.example.com Username: hacluster Password: z1.example.com: Authorized z2.example.com: Authorized",
"pcs cluster setup [--start] [--local] --name cluster_ name node1 [ node2 ] [...]",
"pcs cluster start [--all] [ node ] [...]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-clusteradmin-haar
|
Authentication and authorization
|
Authentication and authorization OpenShift Container Platform 4.17 Configuring user authentication and access controls for users and services Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/index
|
Chapter 8. Remote worker nodes on the network edge
|
Chapter 8. Remote worker nodes on the network edge 8.1. Using remote worker nodes at the network edge You can configure OpenShift Container Platform clusters with nodes located at your network edge. In this topic, they are called remote worker nodes . A typical cluster with remote worker nodes combines on-premise master and worker nodes with worker nodes in other locations that connect to the cluster. This topic is intended to provide guidance on best practices for using remote worker nodes and does not contain specific configuration details. There are multiple use cases across different industries, such as telecommunications, retail, manufacturing, and government, for using a deployment pattern with remote worker nodes. For example, you can separate and isolate your projects and workloads by combining the remote worker nodes into Kubernetes zones . However, having remote worker nodes can introduce higher latency, intermittent loss of network connectivity, and other issues. Among the challenges in a cluster with remote worker node are: Network separation : The OpenShift Container Platform control plane and the remote worker nodes must be able communicate with each other. Because of the distance between the control plane and the remote worker nodes, network issues could prevent this communication. See Network separation with remote worker nodes for information on how OpenShift Container Platform responds to network separation and for methods to diminish the impact to your cluster. Power outage : Because the control plane and remote worker nodes are in separate locations, a power outage at the remote location or at any point between the two can negatively impact your cluster. See Power loss on remote worker nodes for information on how OpenShift Container Platform responds to a node losing power and for methods to diminish the impact to your cluster. Latency spikes or temporary reduction in throughput : As with any network, any changes in network conditions between your cluster and the remote worker nodes can negatively impact your cluster. These types of situations are beyond the scope of this documentation. Note the following limitations when planning a cluster with remote worker nodes: Remote worker nodes are supported on only bare metal clusters with user-provisioned infrastructure. OpenShift Container Platform does not support remote worker nodes that use a different cloud provider than the on-premise cluster uses. Moving workloads from one Kubernetes zone to a different Kubernetes zone can be problematic due to system and environment issues, such as a specific type of memory not being available in a different zone. Proxies and firewalls can present additional limitations that are beyond the scope of this document. Refer to the relevant OpenShift Container Platform documentation for how to address such limitations, such as Configuring your firewall . You are responsible for configuring and maintaining L2/L3-level network connectivity between the control plane and the network-edge nodes. 8.1.1. Network separation with remote worker nodes All nodes send heartbeats to the Kubernetes Controller Manager Operator (kube controller) in the OpenShift Container Platform cluster every 10 seconds. If the cluster does not receive heartbeats from a node, OpenShift Container Platform responds using several default mechanisms. OpenShift Container Platform is designed to be resilient to network partitions and other disruptions. 
You can mitigate some of the more common disruptions, such as interruptions from software upgrades, network splits, and routing issues. Mitigation strategies include ensuring that pods on remote worker nodes request the correct amount of CPU and memory resources, configuring an appropriate replication policy, using redundancy across zones, and using Pod Disruption Budgets on workloads. If the kube controller loses contact with a node after a configured period, the node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. If a workload controller, such as a Deployment object or StatefulSet object, is directing traffic to pods on the unhealthy node and other nodes can reach the cluster, OpenShift Container Platform routes the traffic away from the pods on the node. Nodes that cannot reach the cluster do not get updated with the new traffic routing. As a result, the workloads on those nodes might continue to attempt to reach the unhealthy node. You can mitigate the effects of connection loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart if a node goes down using Kubernetes zones to control pod eviction configuring pod tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when it marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 8.1.2. Power loss on remote worker nodes If a remote worker node loses power or restarts ungracefully, OpenShift Container Platform responds using several default mechanisms. If the Kubernetes Controller Manager Operator (kube controller) loses contact with a node after a configured period, the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules pods on the node for eviction after five minutes, by default. On the node, the pods must be restarted when the node recovers power and reconnects with the control plane. Note If you want the pods to restart immediately upon restart, use static pods. After the node restarts, the kubelet also restarts and attempts to restart the pods that were scheduled on the node. If the connection to the control plane takes longer than the default five minutes, the control plane cannot update the node health and remove the node.kubernetes.io/unreachable taint. On the node, the kubelet terminates any running pods. When these conditions are cleared, the scheduler can start scheduling pods to that node. You can mitigate the effects of power loss by: using daemon sets to create pods that tolerate the taints using static pods that automatically restart with a node configuring pods tolerations to delay or avoid pod eviction configuring the kubelet to control the timing of when the node controller marks nodes as unhealthy. For more information on using these objects in a cluster with remote worker nodes, see About remote worker node strategies . 8.1.3. 
Remote worker node strategies If you use remote worker nodes, consider which objects to use to run your applications. It is recommend to use daemon sets or static pods based on the behavior you want in the event of network issues or power loss. In addition, you can use Kubernetes zones and tolerations to control or avoid pod evictions if the control plane cannot reach remote worker nodes. Daemon sets Daemon sets are the best approach to managing pods on remote worker nodes for the following reasons: Daemon sets do not typically need rescheduling behavior. If a node disconnects from the cluster, pods on the node can continue to run. OpenShift Container Platform does not change the state of daemon set pods, and leaves the pods in the state they last reported. For example, if a daemon set pod is in the Running state, when a node stops communicating, the pod keeps running and is assumed to be running by OpenShift Container Platform. Daemon set pods, by default, are created with NoExecute tolerations for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints with no tolerationSeconds value. These default values ensure that daemon set pods are never evicted if the control plane cannot reach a node. For example: Tolerations added to daemon set pods by default tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule Daemon sets can use labels to ensure that a workload runs on a matching worker node. You can use an OpenShift Container Platform service endpoint to load balance daemon set pods. Note Daemon sets do not schedule pods after a reboot of the node if OpenShift Container Platform cannot reach the node. Static pods If you want pods restart if a node reboots, after a power loss for example, consider static pods . The kubelet on a node automatically restarts static pods as node restarts. Note Static pods cannot use secrets and config maps. Kubernetes zones Kubernetes zones can slow down the rate or, in some cases, completely stop pod evictions. When the control plane cannot reach a node, the node controller, by default, applies node.kubernetes.io/unreachable taints and evicts pods at a rate of 0.1 nodes per second. However, in a cluster that uses Kubernetes zones, pod eviction behavior is altered. If a zone is fully disrupted, where all nodes in the zone have a Ready condition that is False or Unknown , the control plane does not apply the node.kubernetes.io/unreachable taint to the nodes in that zone. For partially disrupted zones, where more than 55% of the nodes have a False or Unknown condition, the pod eviction rate is reduced to 0.01 nodes per second. Nodes in smaller clusters, with fewer than 50 nodes, are not tainted. Your cluster must have more than three zones for these behavior to take effect. You assign a node to a specific zone by applying the topology.kubernetes.io/region label in the node specification. Sample node labels for Kubernetes zones kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east KubeletConfig objects You can adjust the amount of time that the kubelet checks the state of each node. 
To set the interval that affects the timing of when the on-premise node controller marks nodes with the Unhealthy or Unreachable condition, create a KubeletConfig object that contains the node-status-update-frequency and node-status-report-frequency parameters. The kubelet on each node determines the node status as defined by the node-status-update-frequency setting and reports that status to the cluster based on the node-status-report-frequency setting. By default, the kubelet determines the node status every 10 seconds and reports the status every minute. However, if the node state changes, the kubelet reports the change to the cluster immediately. OpenShift Container Platform uses the node-status-report-frequency setting only when the Node Lease feature gate is enabled, which is the default state in OpenShift Container Platform clusters. If the Node Lease feature gate is disabled, the node reports its status based on the node-status-update-frequency setting. Example kubelet config apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - "10s" node-status-report-frequency: 3 - "1m" 1 Specify the type of node to which this KubeletConfig object applies using the label from the MachineConfig object. 2 Specify the frequency that the kubelet checks the status of a node associated with this MachineConfig object. The default value is 10s . If you change this default, the node-status-report-frequency value is changed to the same value. 3 Specify the frequency that the kubelet reports the status of a node associated with this MachineConfig object. The default value is 1m . The node-status-update-frequency parameter works with the node-monitor-grace-period and pod-eviction-timeout parameters. The node-monitor-grace-period parameter specifies how long OpenShift Container Platform waits before marking a node associated with a MachineConfig object as Unhealthy if the controller manager does not receive the node heartbeat. Workloads on the node continue to run after this time. If the remote worker node rejoins the cluster after node-monitor-grace-period expires, pods continue to run. New pods can be scheduled to that node. The node-monitor-grace-period interval is 40s . The node-status-update-frequency value must be lower than the node-monitor-grace-period value. The pod-eviction-timeout parameter specifies the amount of time OpenShift Container Platform waits after marking a node that is associated with a MachineConfig object as Unreachable to start marking pods for eviction. Evicted pods are rescheduled on other nodes. If the remote worker node rejoins the cluster after pod-eviction-timeout expires, the pods running on the remote worker node are terminated because the node controller has evicted the pods on-premise. Pods can then be rescheduled to that node. The pod-eviction-timeout interval is 5m0s . Note Modifying the node-monitor-grace-period and pod-eviction-timeout parameters is not supported. Tolerations You can use pod tolerations to mitigate the effects if the on-premise node controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to a node it cannot reach. A taint with the NoExecute effect affects pods that are running on the node in the following ways: Pods that do not tolerate the taint are queued for eviction.
Pods that tolerate the taint without specifying a tolerationSeconds value in their toleration specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds value remain bound for the specified amount of time. After the time elapses, the pods are queued for eviction. You can delay or avoid pod eviction by configuring pod tolerations with the NoExecute effect for the node.kubernetes.io/unreachable and node.kubernetes.io/not-ready taints. Example toleration in a pod spec ... tolerations: - key: "node.kubernetes.io/unreachable" operator: "Exists" effect: "NoExecute" 1 - key: "node.kubernetes.io/not-ready" operator: "Exists" effect: "NoExecute" 2 tolerationSeconds: 600 ... 1 The NoExecute effect without tolerationSeconds lets pods remain forever if the control plane cannot reach the node. 2 The NoExecute effect with tolerationSeconds : 600 lets pods remain for 10 minutes if the control plane marks the node as Unhealthy . OpenShift Container Platform uses the tolerationSeconds value after the pod-eviction-timeout value elapses. Other types of OpenShift Container Platform objects You can use replica sets, deployments, and replication controllers. The scheduler can reschedule these pods onto other nodes after the node is disconnected for five minutes. Rescheduling onto other nodes can be beneficial for some workloads, such as REST APIs, where an administrator can guarantee a specific number of pods are running and accessible. Note When working with remote worker nodes, rescheduling pods on different nodes might not be acceptable if remote worker nodes are intended to be reserved for specific functions. Stateful sets do not get restarted when there is an outage. The pods remain in the terminating state until the control plane can acknowledge that the pods are terminated. To avoid scheduling a pod to a node that does not have access to the same type of persistent storage, OpenShift Container Platform cannot migrate pods that require persistent volumes to other zones in the case of network separation. Additional resources For more information on daemon sets, see DaemonSets . For more information on taints and tolerations, see Controlling pod placement using node taints . For more information on configuring KubeletConfig objects, see Creating a KubeletConfig CRD . For more information on replica sets, see ReplicaSets . For more information on deployments, see Deployments . For more information on replication controllers, see Replication controllers . For more information on the controller manager, see Kubernetes Controller Manager Operator .
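As a quick check of how these mechanisms look on a live cluster, the following hedged sketch uses standard oc commands to inspect the taints on a remote worker node and the tolerations of a pod scheduled there. The node, pod, and namespace names are placeholders; substitute values from your own cluster.
# List nodes and their readiness; a remote worker node that lost contact shows NotReady.
oc get nodes
# Inspect the taints that the node controller applied to a specific remote worker node.
oc describe node <remote-worker-node> | grep -A 3 "Taints:"
# Review the tolerations of a pod on that node to confirm the expected
# node.kubernetes.io/unreachable and node.kubernetes.io/not-ready entries.
oc get pod <pod-name> -n <namespace> -o jsonpath='{.spec.tolerations}'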
|
[
"tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule",
"kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"",
"tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/nodes/remote-worker-nodes-on-the-network-edge
|
Chapter 11. Triggering Scripts for Cluster Events
|
Chapter 11. Triggering Scripts for Cluster Events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs. You can configure cluster alerts in one of two ways: As of Red Hat Enterprise Linux 6.9, you can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. This is the preferred, simpler method of configuring cluster alerts. Pacemaker alert agents are described in Section 11.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 6.9 and later)" . The ocf:pacemaker:ClusterMon resource can monitor the cluster status and trigger alerts on each cluster event. This resource runs the crm_mon command in the background at regular intervals. For information on the ClusterMon resource see Section 11.2, "Event Notification with Monitoring Resources" . 11.1. Pacemaker Alert Agents (Red Hat Enterprise Linux 6.9 and later) You can create Pacemaker alert agents to take some external action when a cluster event occurs. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything desired with this information, such as sending an email message, logging to a file, or updating a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. See Section 11.1.1, "Using the Sample Alert Agents" for an example of a basic procedure for configuring an alert that uses a sample alert agent. General information on configuring and administering alert agents is provided in Section 11.1.2, "Alert Creation" , Section 11.1.3, "Displaying, Modifying, and Removing Alerts" , Section 11.1.4, "Alert Recipients" , Section 11.1.5, "Alert Meta Options" , and Section 11.1.6, "Alert Configuration Command Examples" . You can write your own alert agents for a Pacemaker alert to call. For information on writing alert agents, see Section 11.1.7, "Writing an Alert Agent" . 11.1.1. Using the Sample Alert Agents When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_snmp.sh.sample script as alert_snmp.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. For information about meta options, see Section 11.1.5, "Alert Meta Options" . After configuring the alert, this example configures a recipient for the alert and displays the alert configuration.
The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. For more information on the format of the pcs alert create and pcs alert recipient add commands, see Section 11.1.2, "Alert Creation" and Section 11.1.4, "Alert Recipients" . 11.1.2. Alert Creation The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. For information on alert meta options, see Section 11.1.5, "Alert Meta Options" . Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call my-script.sh for each event. For an example that shows how to create a cluster alert that uses one of the sample alert agents, see Section 11.1.1, "Using the Sample Alert Agents" . 11.1.3. Displaying, Modifying, and Removing Alerts The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. 11.1.4. Alert Recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 11.1.5. Alert Meta Options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. Table 11.1, "Alert Meta Options" describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 11.1. Alert Meta Options Meta-Attribute Default Description timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date (1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script my-script.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2 . The script will get called twice for each event, with each call using a 15-second timeout.
One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 11.1.6. Alert Configuration Command Examples The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert . The first recipient creation command specifies a recipient of rec_value . Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2 . This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient . Since no recipient ID is specified, the system provides a recipient ID of my-alert-recipient . The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient . The following command removes the recipient my-alert-recipient from alert . The following command removes my-alert from the configuration. 11.1.7. Writing an Alert Agent There are three types of Pacemaker alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. Table 11.2, "Environment Variables Passed to Alert Agents" describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 11.2. Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events. Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status .
CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queuing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources for which the onfail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the STONITH daemon and the crmd daemon will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. For information on configuring scripts that are triggered by the ClusterMon , see Section 11.2, "Event Notification with Monitoring Resources" .
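To illustrate the environment variables described above, the following is a minimal sketch of a custom alert agent; it is not one of the shipped samples. It treats the configured recipient as a log file path, which is one of the recipient interpretations mentioned earlier, and it exits cleanly when no recipient is configured. Install it on every cluster node, for example under /var/lib/pacemaker/, mark it executable, and then reference it with pcs alert create as shown in the examples above.
#!/bin/sh
# Minimal example alert agent (illustrative only).
# Pacemaker passes event details in CRM_alert_* environment variables.

# Handle the case where no recipient is configured.
[ -z "$CRM_alert_recipient" ] && exit 0

# Interpret the recipient as the file to append to.
logfile="$CRM_alert_recipient"

printf '%s %s node=%s desc=%s\n' \
    "$CRM_alert_timestamp" "$CRM_alert_kind" \
    "$CRM_alert_node" "$CRM_alert_desc" >> "$logfile"

exit 0
A recipient such as /var/log/pcmk_alerts.log could then be added with pcs alert recipient add, using the same command format shown in the preceding examples; because the script runs as the hacluster user, the chosen file must be writable by that user.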
|
[
"install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh",
"pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" . pcs alert recipient add snmp_alert 192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])",
"pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert create id=my_alert path=/path/to/myscript.sh",
"pcs alert [config|show]",
"pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert remove alert-id",
"pcs alert recipient add alert-id recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient remove recipient-id",
"pcs alert recipient add my-alert my-alert-recipient id=my-recipient-id options value=some-address",
"pcs alert create id=my-alert path=/path/to/my-script.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=%D %H:%M pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=%c",
"pcs alert create path=/my/path pcs alert recipient add alert rec_value pcs alert recipient add alert rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)",
"pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta meta-option1=2 m=val pcs alert recipient add my-alert my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: m=val meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient)",
"pcs alert update my-alert options option1=newvalue1 meta m=newval pcs alert recipient update my-alert-recipient options option1=new meta metaopt1=newopt pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: metaopt1=newopt",
"pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: m=newval meta-option1=2 Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: metaopt1=newopt",
"pcs alert remove my-alert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/ch-alertscripts-HAAR
|
Appendix B. Using Red Hat Enterprise Linux packages
|
Appendix B. Using Red Hat Enterprise Linux packages This section describes how to use software delivered as RPM packages for Red Hat Enterprise Linux. To ensure the RPM packages for this product are available, you must first register your system . B.1. Overview A component such as a library or server often has multiple packages associated with it. You do not have to install them all. You can install only the ones you need. The primary package typically has the simplest name, without additional qualifiers. This package provides all the required interfaces for using the component at program run time. Packages with names ending in -devel contain headers for C and C++ libraries. These are required at compile time to build programs that depend on this package. Packages with names ending in -docs contain documentation and example programs for the component. For more information about using RPM packages, see one of the following resources: Red Hat Enterprise Linux 7 - Installing and managing software Red Hat Enterprise Linux 8 - Managing software packages B.2. Searching for packages To search for packages, use the yum search command. The search results include package names, which you can use as the value for <package> in the other commands listed in this section. USD yum search <keyword>... B.3. Installing packages To install packages, use the yum install command. USD sudo yum install <package>... B.4. Querying package information To list the packages installed in your system, use the rpm -qa command. USD rpm -qa To get information about a particular package, use the rpm -qi command. USD rpm -qi <package> To list all the files associated with a package, use the rpm -ql command. USD rpm -ql <package>
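A short worked example may make the flow clearer. The package names below are only placeholders for whichever component you are installing; substitute the names returned by your own search.
# Find candidate packages for a component (the keyword is an example).
yum search qpid-proton
# Install the runtime package plus the -devel headers needed to compile against it.
sudo yum install qpid-proton-c qpid-proton-c-devel
# Confirm what was installed and where its files live.
rpm -qi qpid-proton-c
rpm -ql qpid-proton-c-devel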
|
[
"yum search <keyword>",
"sudo yum install <package>",
"rpm -qa",
"rpm -qi <package>",
"rpm -ql <package>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/using_red_hat_enterprise_linux_packages
|
7.2. Advantages of Using Docker
|
7.2. Advantages of Using Docker Docker brings in an API for container management, an image format and a possibility to use a remote registry for sharing containers. This scheme benefits both developers and system administrators with advantages such as: Rapid application deployment - containers include the minimal runtime requirements of the application, reducing their size and allowing them to be deployed quickly. Portability across machines - an application and all its dependencies can be bundled into a single container that is independent from the host version of Linux kernel, platform distribution, or deployment model. This container can be transferred to another machine that runs Docker , and executed there without compatibility issues. Version control and component reuse - you can track successive versions of a container, inspect differences, or roll back to previous versions. Containers reuse components from the preceding layers, which makes them noticeably lightweight. Sharing - you can use a remote repository to share your container with others. Red Hat provides a registry for this purpose, and it is also possible to configure your own private repository. Lightweight footprint and minimal overhead - Docker images are typically very small, which facilitates rapid delivery and reduces the time to deploy new application containers. Simplified maintenance - Docker reduces effort and risk of problems with application dependencies.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-linux_containers_with_docker_format-advantages_of_using_docker
|
18.19. The Configuration Menu and Progress Screen
|
18.19. The Configuration Menu and Progress Screen Once you click Begin Installation at the Installation Summary screen, the progress screen appears. Red Hat Enterprise Linux reports the installation progress on the screen as it writes the selected packages to your system. Figure 18.37. Installing Packages For your reference, a complete log of your installation can be found in the /var/log/anaconda/anaconda.packaging.log file, once you reboot your system. If you chose to encrypt one or more partitions during partitioning setup, a dialog window with a progress bar will be displayed during the early stage of the installation process. This window informs you that the installer is attempting to gather enough entropy (random data) to ensure that the encryption is secure. This window will disappear after 256 bits of entropy are gathered, or after 10 minutes. You can speed up the gathering process by moving your mouse or randomly typing on the keyboard. After the window disappears, the installation process will continue. Figure 18.38. Gathering Entropy for Encryption While the packages are being installed, more configuration is required. Above the installation progress bar are the Root Password and User Creation menu items. The Root Password screen is used to configure the system's root account. This account can be used to perform critical system management and administration tasks. The same tasks can also be performed with a user account with the wheel group membership; if such a user account is created during installation, setting up a root password is not mandatory. Creating a user account is optional and can be done after installation, but it is recommended to do it on this screen. A user account is used for normal work and to access the system. Best practice suggests that you always access the system through a user account, not the root account. It is possible to disable access to the Root Password or Create User screens. To do so, use a Kickstart file which includes the rootpw --lock or user --lock commands. See Section 27.3.1, "Kickstart Commands and Options" for more information about these commands. 18.19.1. Set the Root Password Setting up a root account and password is an important step during your installation. The root account (also known as the superuser) is used to install packages, upgrade RPM packages, and perform most system maintenance. The root account gives you complete control over your system. For this reason, the root account is best used only to perform system maintenance or administration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about becoming root. Figure 18.39. Root Password Screen Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. Click the Root Password menu item and enter your new password into the Root Password field. Red Hat Enterprise Linux displays the characters as asterisks for security. Type the same password into the Confirm field to ensure it is set correctly. After you set the root password, click Done to return to the User Settings screen.
The following are the requirements and recommendations for creating a strong root password: must be at least eight characters long may contain numbers, letters (upper and lower case) and symbols is case-sensitive and should contain a mix of cases something you can remember but that is not easily guessed should not be a word, abbreviation, or number associated with you, your organization, or found in a dictionary (including foreign languages) should not be written down; if you must write it down keep it secure Note To change your root password after you have completed the installation, run the passwd command as root . If you forget the root password, see Section 32.1.3, "Resetting the Root Password" for instructions on how to use the rescue mode to set a new one. 18.19.2. Create a User Account To create a regular (non-root) user account during the installation, click User Settings on the progress screen. The Create User screen appears, allowing you to set up the regular user account and configure its parameters. Though recommended to do during installation, this step is optional and can be performed after the installation is complete. Note You must always set up at least one way to gain root privileges to the installed system: either using a root account, or by creating a user account with administrative privileges (member of the wheel group), or both. To leave the user creation screen after you have entered it, without creating a user, leave all the fields empty and click Done . Figure 18.40. User Account Configuration Screen Enter the full name and the user name in their respective fields. Note that the system user name must be shorter than 32 characters and cannot contain spaces. It is highly recommended to set up a password for the new account. When setting up a strong password even for a non-root user, follow the guidelines described in Section 18.19.1, "Set the Root Password" . Click the Advanced button to open a new dialog with additional settings. Figure 18.41. Advanced User Account Configuration By default, each user gets a home directory corresponding to their user name. In most scenarios, there is no need to change this setting. You can also manually define a system identification number for the new user and their default group by selecting the check boxes. The range for regular user IDs starts at the number 1000 . At the bottom of the dialog, you can enter a comma-separated list of additional groups to which the new user will belong. The new groups will be created in the system. To customize group IDs, specify the numbers in parentheses. Note Consider setting IDs of regular users and their default groups at a range starting at 5000 instead of 1000 . That is because the range reserved for system users and groups, 0 - 999 , might increase in the future and thus overlap with IDs of regular users. For creating users with custom IDs using kickstart, see user (optional) . For changing the minimum UID and GID limits after the installation, which ensures that your chosen UID and GID ranges are applied automatically on user creation, see the Users and Groups chapter of the System Administrator's Guide . Once you have customized the user account, click Save Changes to return to the User Settings screen.
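For reference, the following hedged Kickstart excerpt shows how the rootpw --lock and user commands mentioned above might be combined to lock direct root logins and create an administrative user with a custom ID in the 5000 range; the user name, UID, GID, and encrypted password are placeholders.
# Illustrative Kickstart excerpt (values are placeholders)
rootpw --lock
user --name=sysadmin --groups=wheel --uid=5000 --gid=5000 --password=<encrypted-password> --iscrypted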
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-configuration-progress-menu-s390
|
5.2. Known Issues
|
5.2. Known Issues If the qeth interface was previously configured using system-config-network 1.6.0.el6.2 , the "OPTIONS=" line needs to be manually added to /etc/sysconfig/network-scripts/ifcfg-<interface> . After the configuration has been manually changed, activate the interface by either rebooting the system, or running the following commands: A known issue in the bnx2 driver prevents BCM5709S network adapters from performing a vmcore core dump over NFS. Intel 82575EB ethernet devices do not function in a 32-bit environment. To work around this issue, modify the kernel parameters to include the intel_iommu=off option (one possible way to do this is shown after this list of issues). Running the rds-ping command may fail, returning the error: Note also that this error may occur even with LOAD_RDS=yes set in /etc/rdma/rdma.conf . To work around this issue, load the rds-tcp module. Running the command rds-stress on a client may result in the following error attempting to connect to the server: When configuring a network interface manually, including static IP addresses and search domains, it is possible that a search entry will not be propagated to /etc/resolv.conf . Consequently, short host names that do not include the domain name will fail to resolve. To work around this issue, add a search entry manually to /etc/resolv.conf . Under some circumstances, the NetworkManager panel applet cannot determine if a user has permission to enable networking. Consequently, after logging into the desktop, the "Enable Networking" and "Enable Wireless" checkboxes may be disabled. To work around this, run the following command as root: Alternatively, WiFi can be enabled using the command: or disabled using the command: Under some circumstances, the netcf command crashes, returning the error message: To work around this issue, set the following value in /etc/sysctl.conf: This issue presents when the augeas library (used by netcf ) has trouble parsing one of the system config files that netcf needs to read or modify. The default value of the Emulex lpfc module parameter, lpfc_use_msi, was 2 (MSI-X) on Red Hat Enterprise Linux 5.4. In Red Hat Enterprise Linux 6 this default is now set to 0 (INTx). This change causes the driver to stop using MSI-X interrupt mode and revert to non-MSI (INTx) interrupt mode. This change in defaults addresses apparent regressions in some hardware platforms, introduced when the default lpfc driver value was previously changed from 0 to 2 (which made MSI-X the default behavior). If the lpfc module is behaving erratically, work around this issue by setting the lpfc module parameter lpfc_use_msi to 2.
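One possible way to add the intel_iommu=off kernel parameter, offered here only as an illustrative sketch, is to update the kernel lines in the GRUB configuration with grubby and then reboot; verify the resulting /boot/grub/grub.conf entries before relying on the change.
# Illustrative: append intel_iommu=off to the kernel line of all boot entries, then reboot.
grubby --update-kernel=ALL --args="intel_iommu=off"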
|
[
"/sbin/znet_cio_free SUBSYSTEM=\"ccw\" DEVPATH=\"bus/ccw/devices/<SUBCHANNEL 0>\" /lib/udev/ccw_init ifup <interface>",
"bind() failed, errno: 99 (Cannot assign requested address).",
"connecting to <server IP address>:4000: No route to host connect(<server IP address>) failed#",
"touch /usr/share/polkit-1/actions/org.freedesktop.NetworkManager.policy",
"nmcli nm wifi on",
"nmcli nm wifi off",
"Failed to initialize netcf error: unspecified error",
"net.bridge.bridge-nf-call-iptables = 0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ar01s05s02
|
8.3. autofs
|
8.3. autofs One drawback of using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components: a kernel module that implements a file system, and a user-space daemon that performs all of the other functions. The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems. Important The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such, it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system first before attempting to automount an NFS share. autofs is also part of the 'Network File System Client' group. autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs ) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host. 8.3.1. Improvements in autofs Version 5 over Version 4 autofs version 5 features the following enhancements over version 4: Direct map support Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps). Lazy mount and unmount support Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/ host as a multi-mount map entry. When using the -hosts map, an ls of /net/ host will mount autofs trigger mounts for each export from host . These will then mount and expire them as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports. Enhanced LDAP support The autofs configuration file ( /etc/sysconfig/autofs ) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf . 
The default configuration file is self-documenting, and uses an XML format. Proper use of the Name Service Switch ( nsswitch ) configuration. The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation. For more information on the supported syntax of this file, see man nsswitch.conf . Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp , nis , nisplus , ldap , and hesiod . Multiple master map entries per autofs mount point One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point /- . The map keys for each entry are merged and behave as one map. Example 8.2. Multiple Master Map Entries per autofs Mount Point Following is an example in the connectathon test maps for the direct mounts: 8.3.2. Configuring autofs The primary configuration file for the automounter is /etc/auto.master , also referred to as the master map, which may be changed as described in Section 8.3.1, "Improvements in autofs Version 5 over Version 4" . The master map lists autofs -controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows: The variables used in this format are: mount-point The autofs mount point, /home , for example. map-name The name of a map source which contains a list of mount points, and the file system location from which those mount points should be mounted. options If supplied, these apply to all entries in the given map provided they do not themselves have options specified. This behavior is different from autofs version 4 where options were cumulative. This has been changed to implement mixed environment compatibility. Example 8.3. /etc/auto.master File The following is a sample line from /etc/auto.master file (displayed with cat /etc/auto.master ): The general format of maps is similar to the master map, however the "options" appear between the mount point and the location instead of at the end of the entry as in the master map: The variables used in this format are: mount-point This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key ( mount-point ) may be followed by a space-separated list of offset directories (subdirectory names each beginning with a / ) making them what is known as a multi-mount entry. options Whenever supplied, these are the mount options for the map entries that do not specify their own options. location This refers to the file system location such as a local file system path (preceded with the Sun map format escape character ":" for map names beginning with / ), an NFS file system or other valid file system location. The following is a sample of contents from a map file (for example, /etc/auto.misc ): The first column in a map file indicates the autofs mount point ( sales and payroll from the server called personnel ).
The second column indicates the options for the autofs mount, while the third column indicates the source of the mount. Following the given configuration, the autofs mount points will be /home/payroll and /home/sales . The -fstype= option is often omitted and is generally not needed for correct operation. The automounter creates the directories if they do not exist. If the directories existed before the automounter was started, the automounter will not remove them when it exits. To start the automount daemon, use the following command: To restart the automount daemon, use the following command: Using the given configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc , the automount daemon automatically mounts the directory. If a timeout is specified, the directory is automatically unmounted if the directory is not accessed for the timeout period. To view the status of the automount daemon, use the following command: 8.3.3. Overriding or Augmenting Site Configuration Files It can be useful to override site defaults for a specific mount point on a client system. For example, consider the following conditions: Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive: The auto.master file contains: The NIS auto.master map file contains: The NIS auto.home map contains: The file map /etc/auto.home does not exist. Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client needs to use the following /etc/auto.master map: The /etc/auto.home map contains the entry: Because the automounter only processes the first occurrence of a mount point, /home contains the contents of /etc/auto.home instead of the NIS auto.home map. Alternatively, to augment the site-wide auto.home map with just a few entries, create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map. Then the /etc/auto.home file map looks similar to: With these NIS auto.home map conditions, the ls /home command outputs: This last example works as expected because autofs does not include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the map source in the nsswitch configuration. 8.3.4. Using LDAP to Store Automounter Maps LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the automounter . To configure LDAP access, modify /etc/openldap/ldap.conf . Ensure that BASE, URI, and schema are set appropriately for your site. The most recently established schema for storing automount maps in LDAP is described by rfc2307bis . To use this schema it is necessary to set it in the autofs configuration ( /etc/sysconfig/autofs ) by removing the comment characters from the schema definition. For example: Example 8.4. Setting autofs Configuration Ensure that these are the only schema entries not commented in the configuration. The automountKey replaces the cn attribute in the rfc2307bis schema. Following is an example of an LDAP Data Interchange Format ( LDIF ) configuration: Example 8.5. LDIF Configuration
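Putting the preceding pieces together, a minimal end-to-end sketch built from the sample entries above might look like this; the server name personnel and the device paths come straight from the example map and stand in for your own environment.
# /etc/auto.master entry mapping the /home mount point to the auto.misc map:
/home /etc/auto.misc
# /etc/auto.misc contents, as in the sample map above:
payroll -fstype=nfs personnel:/dev/hda3
sales   -fstype=ext3 :/dev/hda4
# Reload the automounter and trigger an on-demand mount:
systemctl restart autofs
ls /home/payroll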
|
[
"/- /tmp/auto_dcthon /- /tmp/auto_test3_direct /- /tmp/auto_test4_direct",
"mount-point map-name options",
"/home /etc/auto.misc",
"mount-point [ options ] location",
"payroll -fstype=nfs personnel:/dev/hda3 sales -fstype=ext3 :/dev/hda4",
"systemctl start autofs",
"systemctl restart autofs",
"systemctl status autofs",
"automount: files nis",
"+auto.master",
"/home auto.home",
"beth fileserver.example.com:/export/home/beth joe fileserver.example.com:/export/home/joe * fileserver.example.com:/export/home/&",
"/home \\u00ad/etc/auto.home +auto.master",
"* labserver.example.com:/export/home/&",
"mydir someserver:/export/mydir +auto.home",
"beth joe mydir",
"DEFAULT_MAP_OBJECT_CLASS=\"automountMap\" DEFAULT_ENTRY_OBJECT_CLASS=\"automount\" DEFAULT_MAP_ATTRIBUTE=\"automountMapName\" DEFAULT_ENTRY_ATTRIBUTE=\"automountKey\" DEFAULT_VALUE_ATTRIBUTE=\"automountInformation\"",
"extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.master)) requesting: ALL # auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: top objectClass: automountMap automountMapName: auto.master extended LDIF # LDAPv3 base <automountMapName=auto.master,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # /home, auto.master, example.com dn: automountMapName=auto.master,dc=example,dc=com objectClass: automount cn: /home automountKey: /home automountInformation: auto.home extended LDIF # LDAPv3 base <> with scope subtree filter: (&(objectclass=automountMap)(automountMapName=auto.home)) requesting: ALL # auto.home, example.com dn: automountMapName=auto.home,dc=example,dc=com objectClass: automountMap automountMapName: auto.home extended LDIF # LDAPv3 base <automountMapName=auto.home,dc=example,dc=com> with scope subtree filter: (objectclass=automount) requesting: ALL # foo, auto.home, example.com dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: foo automountInformation: filer.example.com:/export/foo /, auto.home, example.com dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com objectClass: automount automountKey: / automountInformation: filer.example.com:/export/&"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/nfs-autofs
|
Chapter 9. Sending Binary Data with SOAP MTOM
|
Chapter 9. Sending Binary Data with SOAP MTOM Abstract SOAP Message Transmission Optimization Mechanism (MTOM) replaces SOAP with attachments as a mechanism for sending binary data as part of an XML message. Using MTOM with Apache CXF requires adding the correct schema types to a service's contract and enabling the MTOM optimizations. 9.1. Overview of MTOM SOAP Message Transmission Optimization Mechanism (MTOM) specifies an optimized method for sending binary data as part of a SOAP message. Unlike SOAP with Attachments, MTOM requires the use of XML-binary Optimized Packaging (XOP) packages for transmitting binary data. Using MTOM to send binary data does not require you to fully define the MIME Multipart/Related message as part of the SOAP binding. It does, however, require that you do the following: Annotate the data that you are going to send as an attachment. You can annotate either your WSDL or the Java class that implements your data. Enable the runtime's MTOM support. This can be done either programmatically or through configuration. Develop a DataHandler for the data being passed as an attachment. Note Developing DataHandler s is beyond the scope of this book. 9.2. Annotating Data Types to use MTOM Overview In WSDL, when defining a data type for passing along a block of binary data, such as an image file or a sound file, you define the element for the data to be of type xsd:base64Binary . By default, any element of type xsd:base64Binary results in the generation of a byte[] which can be serialized using MTOM. However, the default behavior of the code generators does not take full advantage of the serialization. In order to fully take advantage of MTOM you must add annotations to either your service's WSDL document or the JAXB class that implements the binary data structure. Adding the annotations to the WSDL document forces the code generators to generate streaming data handlers for the binary data. Annotating the JAXB class involves specifying the proper content types and might also involve changing the type specification of the field containing the binary data. WSDL first Example 9.1, "Message for MTOM" shows a WSDL document for a Web service that uses a message which contains one string field, one integer field, and a binary field. The binary field is intended to carry a large image file, so it is not appropriate to send it as part of a normal SOAP message. Example 9.1. Message for MTOM If you want to use MTOM to send the binary part of the message as an optimized attachment you must add the xmime:expectedContentTypes attribute to the element containing the binary data. This attribute is defined in the http://www.w3.org/2005/05/xmlmime namespace and specifies the MIME types that the element is expected to contain. You can specify a comma separated list of MIME types. The setting of this attribute changes how the code generators create the JAXB class for the data. For most MIME types, the code generator creates a DataHandler. Some MIME types, such as those for images, have defined mappings. Note The MIME types are maintained by the Internet Assigned Numbers Authority(IANA) and are described in detail in Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies and Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types . For most uses you specify application/octet-stream . Example 9.2, "Binary Data for MTOM" shows how you can modify xRayType from Example 9.1, "Message for MTOM" for using MTOM. Example 9.2. 
Binary Data for MTOM The JAXB class generated for xRayType no longer contains a byte[] . Instead the code generator sees the xmime:expectedContentTypes attribute and generates a DataHandler for the imageData field. Note You do not need to change the binding element to use MTOM. The runtime makes the appropriate changes when the data is sent. Java first If you are doing Java first development you can make your JAXB class MTOM ready by doing the following: Make sure the field holding the binary data is a DataHandler. Add the @XmlMimeType() annotation to the field containing the data you want to stream as an MTOM attachment. Example 9.3, "JAXB Class for MTOM" shows a JAXB class annotated for using MTOM. Example 9.3. JAXB Class for MTOM 9.3. Enabling MTOM By default the Apache CXF runtime does not enable MTOM support. It sends all binary data as either part of the normal SOAP message or as an unoptimized attachment. You can activate MTOM support either programmatically or through the use of configuration. 9.3.1. Using JAX-WS APIs Overview Both service providers and consumers must have the MTOM optimizations enabled. The JAX-WS APIs offer different mechanisms for each type of endpoint. Service provider If you published your service provider using the JAX-WS APIs you enable the runtime's MTOM support as follows: Access the Endpoint object for your published service. The easiest way to access the Endpoint object is when you publish the endpoint. For more information see Chapter 31, Publishing a Service . Get the SOAP binding from the Endpoint using its getBinding() method, as shown in Example 9.4, "Getting the SOAP Binding from an Endpoint" . Example 9.4. Getting the SOAP Binding from an Endpoint You must cast the returned binding object to a SOAPBinding object to access the MTOM property. Set the binding's MTOM enabled property to true using the binding's setMTOMEnabled() method, as shown in Example 9.5, "Setting a Service Provider's MTOM Enabled Property" . Example 9.5. Setting a Service Provider's MTOM Enabled Property Consumer To MTOM enable a JAX-WS consumer you must do the following: Cast the consumer's proxy to a BindingProvider object. For information on getting a consumer proxy see Chapter 25, Developing a Consumer Without a WSDL Contract or Chapter 28, Developing a Consumer From a WSDL Contract . Get the SOAP binding from the BindingProvider using its getBinding() method, as shown in Example 9.6, "Getting a SOAP Binding from a BindingProvider " . Example 9.6. Getting a SOAP Binding from a BindingProvider Set the binding's MTOM enabled property to true using the binding's setMTOMEnabled() method, as shown in Example 9.7, "Setting a Consumer's MTOM Enabled Property" . Example 9.7. Setting a Consumer's MTOM Enabled Property 9.3.2. Using configuration Overview If you publish your service using XML, such as when deploying to a container, you can enable your endpoint's MTOM support in the endpoint's configuration file. For more information on configuring endpoints, see Part IV, "Configuring Web Service Endpoints" . Procedure The MTOM property is set inside the jaxws:endpoint element for your endpoint. To enable MTOM do the following: Add a jaxws:properties child element to the endpoint's jaxws:endpoint element. Add an entry child element to the jaxws:properties element. Set the entry element's key attribute to mtom-enabled . Set the entry element's value attribute to true . Example Example 9.8, "Configuration for Enabling MTOM" shows an endpoint that is MTOM enabled. Example 9.8.
Configuration for Enabling MTOM
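The fragments shown in Example 9.4 through Example 9.7 fit together as follows. This is a minimal sketch rather than part of the original example set: the implementation class XRayStorageImpl, the generated service class XRayStorage_Service, and the port accessor getXRayStoragePort() are assumed names of the kind a WSDL-to-Java tool would typically generate from the contract in Example 9.1.

import javax.xml.ws.BindingProvider;
import javax.xml.ws.Endpoint;
import javax.xml.ws.soap.SOAPBinding;

// Provider side: publish the service and enable MTOM on its SOAP binding.
Endpoint ep = Endpoint.publish("http://localhost:9000/xRayStorage", new XRayStorageImpl());
SOAPBinding serverBinding = (SOAPBinding) ep.getBinding();
serverBinding.setMTOMEnabled(true);

// Consumer side: cast the proxy to BindingProvider and enable MTOM on its binding.
XRayStorage proxy = new XRayStorage_Service().getXRayStoragePort();
SOAPBinding clientBinding = (SOAPBinding) ((BindingProvider) proxy).getBinding();
clientBinding.setMTOMEnabled(true);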
|
[
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"XrayStorage\" targetNamespace=\"http://mediStor.org/x-rays\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:tns=\"http://mediStor.org/x-rays\" xmlns:soap12=\"http://schemas.xmlsoap.org/wsdl/soap12/\" xmlns:xsd1=\"http://mediStor.org/types/\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"> <types> <schema targetNamespace=\"http://mediStor.org/types/\" xmlns=\"http://www.w3.org/2001/XMLSchema\"> <complexType name=\"xRayType\"> <sequence> <element name=\"patientName\" type=\"xsd:string\" /> <element name=\"patientNumber\" type=\"xsd:int\" /> <element name=\"imageData\" type=\"xsd:base64Binary\" /> </sequence> </complexType> <element name=\"xRay\" type=\"xsd1:xRayType\" /> </schema> </types> <message name=\"storRequest\"> <part name=\"record\" element=\"xsd1:xRay\"/> </message> <message name=\"storResponse\"> <part name=\"success\" type=\"xsd:boolean\"/> </message> <portType name=\"xRayStorage\"> <operation name=\"store\"> <input message=\"tns:storRequest\" name=\"storRequest\"/> <output message=\"tns:storResponse\" name=\"storResponse\"/> </operation> </portType> <binding name=\"xRayStorageSOAPBinding\" type=\"tns:xRayStorage\"> <soap12:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"store\"> <soap12:operation soapAction=\"\" style=\"document\"/> <input name=\"storRequest\"> <soap12:body use=\"literal\"/> </input> <output name=\"storResponse\"> <soap12:body use=\"literal\"/> </output> </operation> </binding> </definitions>",
"<types> <schema targetNamespace=\"http://mediStor.org/types/\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:xmime=\"http://www.w3.org/2005/05/xmlmime\"> <complexType name=\"xRayType\"> <sequence> <element name=\"patientName\" type=\"xsd:string\" /> <element name=\"patientNumber\" type=\"xsd:int\" /> <element name=\"imageData\" type=\"xsd:base64Binary\" xmime:expectedContentTypes=\"application/octet-stream\" /> </sequence> </complexType> <element name=\"xRay\" type=\"xsd1:xRayType\" /> </schema> </types>",
"@XmlType public class XRayType { protected String patientName; protected int patientNumber; @XmlMimeType(\"application/octet-stream\") protected DataHandler imageData; }",
"// Endpoint ep is declared previously SOAPBinding binding = (SOAPBinding)ep.getBinding();",
"binding.setMTOMEnabled(true);",
"// BindingProvider bp declared previously SOAPBinding binding = (SOAPBinding)bp.getBinding();",
"binding.setMTOMEnabled(true);",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/jaxws http://cxf.apache.org/schema/jaxws.xsd\"> <jaxws:endpoint id=\"xRayStorage\" implementor=\"demo.spring.xRayStorImpl\" address=\"http://localhost/xRayStorage\"> <jaxws:properties> <entry key=\"mtom-enabled\" value=\"true\"/> </jaxws:properties> </jaxws:endpoint> </beans>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/FUSECXFMTOM
|
function::local_clock_us
|
function::local_clock_us Name function::local_clock_us - Number of microseconds on the local cpu's clock Synopsis Arguments None Description This function returns the number of microseconds on the local cpu's clock. The value is always monotonic when compared on the same cpu, but may have some drift between cpus (within about a jiffy).
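For illustration, a minimal SystemTap sketch that prints this value periodically; the timer interval, the use of cpu(), and the output format are arbitrary choices for the example:

# Print the local cpu clock in microseconds twice a second, then exit after five seconds.
probe timer.ms(500) { printf("cpu %d: %d us\n", cpu(), local_clock_us()) }
probe timer.s(5) { exit() }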
|
[
"local_clock_us:long()"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-local-clock-us
|
Chapter 2. Using Self Node Remediation
|
Chapter 2. Using Self Node Remediation You can use the Self Node Remediation Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur. 2.1. About the Self Node Remediation Operator The Self Node Remediation Operator runs on the cluster nodes and reboots nodes that are identified as unhealthy. The Operator uses the MachineHealthCheck or NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the MachineHealthCheck or the NodeHealthCheck resource creates the SelfNodeRemediation custom resource (CR), which triggers the Self Node Remediation Operator. The SelfNodeRemediation CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediation metadata: name: selfnoderemediation-sample namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 1 status: lastError: <last_error_message> 2 1 Specifies the remediation strategy for the nodes. 2 Displays the last error that occurred during remediation. When remediation succeeds or if no errors occur, the field is left empty. The Self Node Remediation Operator minimizes downtime for stateful applications and restores compute capacity if transient failures occur. You can use this Operator regardless of the management interface, such as IPMI or an API to provision a node, and regardless of the cluster installation type, such as installer-provisioned infrastructure or user-provisioned infrastructure. 2.1.1. About watchdog devices Watchdog devices can be any of the following: Independently powered hardware devices Hardware devices that share power with the hosts they control Virtual devices implemented in software, or softdog Hardware watchdog and softdog devices have electronic or software timers, respectively. These watchdog devices are used to ensure that the machine enters a safe state when an error condition is detected. The cluster is required to repeatedly reset the watchdog timer to prove that it is in a healthy state. This timer might elapse due to fault conditions, such as deadlocks, CPU starvation, and loss of network or disk access. If the timer expires, the watchdog device assumes that a fault has occurred and the device triggers a forced reset of the node. Hardware watchdog devices are more reliable than softdog devices. 2.1.1.1. Understanding Self Node Remediation Operator behavior with watchdog devices The Self Node Remediation Operator determines the remediation strategy based on the watchdog devices that are present. If a hardware watchdog device is configured and available, the Operator uses it for remediation. If a hardware watchdog device is not configured, the Operator enables and uses a softdog device for remediation. If neither watchdog devices are supported, either by the system or by the configuration, the Operator remediates nodes by using software reboot. Additional resources Configuring a watchdog device for the virtual machine 2.2. Control plane fencing In earlier releases, you could enable Self Node Remediation and Node Health Check on worker nodes. In the event of node failure, you can now also follow remediation strategies on control plane nodes. Self Node Remediation occurs in two primary scenarios. API Server Connectivity In this scenario, the control plane node to be remediated is not isolated. 
It can be directly connected to the API Server, or it can be indirectly connected to the API Server through worker nodes or control plane nodes that are directly connected to the API Server. When there is API Server Connectivity, the control plane node is remediated only if the Node Health Check Operator has created a SelfNodeRemediation custom resource (CR) for the node. No API Server Connectivity In this scenario, the control plane node to be remediated is isolated from the API Server. The node cannot connect directly or indirectly to the API Server. When there is no API Server Connectivity, the control plane node will be remediated as outlined in the following steps: Check the status of the control plane node with the majority of the peer worker nodes. If the majority of the peer worker nodes cannot be reached, the node will be analyzed further. Self-diagnose the status of the control plane node. If the self diagnostics pass, no action will be taken. If the self diagnostics fail, the node will be fenced and remediated. The self diagnostics currently supported are checking the kubelet service status, and checking endpoint availability using opt-in configuration. If the node did not manage to communicate with most of its worker peers, check the connectivity of the control plane node with other control plane nodes. If the node can communicate with any other control plane peer, no action will be taken. Otherwise, the node will be fenced and remediated. 2.3. Installing the Self Node Remediation Operator by using the web console You can use the Red Hat OpenShift web console to install the Self Node Remediation Operator. Note The Node Health Check Operator also installs the Self Node Remediation Operator as a default remediation provider. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators → OperatorHub . Select the Self Node Remediation Operator from the list of available Operators, and then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators → Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators → Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads → Pods page and check the logs of the self-node-remediation-controller-manager pod and self-node-remediation-ds pods in the openshift-workload-availability project for any reported issues. 2.4. Installing the Self Node Remediation Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Self Node Remediation Operator. You can install the Self Node Remediation Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure Create a Namespace custom resource (CR) for the Self Node Remediation Operator: Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: USD oc create -f workload-availability-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: USD oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, self-node-remediation-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: self-node-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable installPlanApproval: Manual 2 name: self-node-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: self-node-remediation 1 Specify the Namespace where you want to install the Self Node Remediation Operator. To install the Self Node Remediation Operator in the openshift-workload-availability namespace, specify openshift-workload-availability in the Subscription CR. 2 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. To create the Subscription CR, run the following command: USD oc create -f self-node-remediation-subscription.yaml Verify that the Self Node Remediation Operator created the SelfNodeRemediationTemplate CR: USD oc get selfnoderemediationtemplate -n openshift-workload-availability Example output self-node-remediation-automatic-strategy-template Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE self-node-remediation.v0.8.0 Self Node Remediation Operator v.0.8.0 self-node-remediation.v0.7.1 Succeeded Verify that the Self Node Remediation Operator is up and running: USD oc get deployment -n openshift-workload-availability Example output NAME READY UP-TO-DATE AVAILABLE AGE self-node-remediation-controller-manager 1/1 1 1 28h Verify that the Self Node Remediation Operator created the SelfNodeRemediationConfig CR: USD oc get selfnoderemediationconfig -n openshift-workload-availability Example output NAME AGE self-node-remediation-config 28h Verify that each self node remediation pod is scheduled and running on each worker node and control plane node: USD oc get daemonset -n openshift-workload-availability Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE self-node-remediation-ds 6 6 6 6 6 <none> 28h 2.5. Configuring the Self Node Remediation Operator The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR and the SelfNodeRemediationTemplate Custom Resource Definition (CRD). 
Note To avoid unexpected reboots of a specific node, the Node Maintenance Operator places the node in maintenance mode and automatically adds a node selector that prevents the SNR daemonset from running on the specific node. 2.5.1. Understanding the Self Node Remediation Operator configuration The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR with the name self-node-remediation-config . The CR is created in the namespace of the Self Node Remediation Operator. A change in the SelfNodeRemediationConfig CR re-creates the Self Node Remediation daemon set. The SelfNodeRemediationConfig CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationConfig metadata: name: self-node-remediation-config namespace: openshift-workload-availability spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /dev/watchdog 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10 hostPort: 30001 11 customDsTolerations: 12 - effect: NoSchedule key: node-role.kubernetes.io.infra operator: Equal value: "value1" tolerationSeconds: 3600 1 Specify an optional time duration that the Operator waits before recovering affected workloads running on an unhealthy node. Starting replacement pods while they are still running on the failed node can lead to data corruption and a violation of run-once semantics. The Operator calculates a minimum duration using the values in the ApiServerTimeout , ApiCheckInterval , MaxApiErrorThreshold , PeerDialTimeout , and PeerRequestTimeout fields, as well as the watchdog timeout and the cluster size at the time of remediation. To check the minimum duration calculation, view the manager pod logs and find references to the calculated minimum time in seconds . If you specify a value that is lower than the minimum duration, the Operator uses the minimum duration. However, if you want to increase the duration to a value higher than this minimum value, you can set safeTimeToAssumeNodeRebootedSeconds to a value higher than the minimum duration. 2 Specify the file path of the watchdog device in the nodes. If you enter an incorrect path to the watchdog device, the Self Node Remediation Operator automatically detects the softdog device path. If a watchdog device is unavailable, the SelfNodeRemediationConfig CR uses a software reboot. 3 Specify if you want to enable software reboot of the unhealthy nodes. By default, the value of isSoftwareRebootEnabled is set to true . To disable the software reboot, set the parameter value to false . 4 Specify the timeout duration to check connectivity with each API server. When this duration elapses, the Operator starts remediation. The timeout duration must be greater than or equal to 10 milliseconds. 5 Specify the frequency to check connectivity with each API server. The timeout duration must be greater than or equal to 1 second. 6 Specify a threshold value. After reaching this threshold, the node starts contacting its peers. The threshold value must be greater than or equal to 1 second. 7 Specify the duration of the timeout for the peer to connect the API server. The timeout duration must be greater than or equal to 10 milliseconds. 8 Specify the duration of the timeout for establishing connection with the peer. The timeout duration must be greater than or equal to 10 milliseconds. 
9 Specify the duration of the timeout to get a response from the peer. The timeout duration must be greater than or equal to 10 milliseconds. 10 Specify the frequency to update peer information such as IP address. The timeout duration must be greater than or equal to 10 seconds. 11 Specify an optional value to change the port that Self Node Remediation agents use for internal communication. The value must be greater than 0. The default value is port 30001. 12 Specify custom toleration Self Node Remediation agents that are running on the DaemonSets to support remediation for different types of nodes. You can configure the following fields: effect : The effect indicates the taint effect to match. If this field is empty, all taint effects are matched. When specified, allowed values are NoSchedule , PreferNoSchedule and NoExecute . key : The key is the taint key that the toleration applies to. If this field is empty, all taint keys are matched. If the key is empty, the operator field must be Exists . This combination means to match all values and all keys. operator : The operator represents a key's relationship to the value. Valid operators are Exists and Equal . The default is Equal . Exists is equivalent to a wildcard for a value, so that a pod can tolerate all taints of a particular category. value : The taint value the toleration matches to. If the operator is Exists , the value should be empty, otherwise it is just a regular string. tolerationSeconds : The period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (that is, do not evict). Zero and negative values will be treated as 0 (that is evict immediately) by the system. Custom toleration allows you to add a toleration to the Self Node Remediation agent pod. For more information, see Using tolerations to control OpenShift Logging pod placement . Note You can edit the self-node-remediation-config CR that is created by the Self Node Remediation Operator. However, when you try to create a new CR for the Self Node Remediation Operator, the following message is displayed in the logs: controllers.SelfNodeRemediationConfig ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config' or not in the namespace of the operator: 'openshift-workload-availability' {"selfnoderemediationconfig": "openshift-workload-availability/selfnoderemediationconfig-copy"} 2.5.2. Understanding the Self Node Remediation Template configuration The Self Node Remediation Operator also creates the SelfNodeRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes. The following remediation strategies are available: Automatic This remediation strategy simplifies the remediation process by letting the Self Node Remediation Operator decide on the most suitable remediation strategy for the cluster. This strategy checks if the OutOfServiceTaint strategy is available on the cluster. If the OutOfServiceTaint strategy is available, the Operator selects the OutOfServiceTaint strategy. If the OutOfServiceTaint strategy is not available, the Operator selects the ResourceDeletion strategy. Automatic is the default remediation strategy. ResourceDeletion This remediation strategy removes the pods on the node, rather than the removal of the node object. This strategy recovers workloads faster. 
OutOfServiceTaint This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node, rather than the removal of the node object. It achieves this by placing the OutOfServiceTaint strategy on the node. This strategy recovers workloads faster. This strategy has been supported on technology preview since OpenShift Container Platform version 4.13, and on general availability since OpenShift Container Platform version 4.15. The Self Node Remediation Operator creates the SelfNodeRemediationTemplate CR for the strategy self-node-remediation-automatic-strategy-template , which the Automatic remediation strategy uses. The SelfNodeRemediationTemplate CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: creationTimestamp: "2022-03-02T08:02:40Z" name: self-node-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2 1 Specifies the type of remediation template based on the remediation strategy. Replace <remediation_object> with either resource or node ; for example, self-node-remediation-resource-deletion-template . 2 Specifies the remediation strategy. The default remediation strategy is Automatic . 2.5.3. Troubleshooting the Self Node Remediation Operator 2.5.3.1. General troubleshooting Issue You want to troubleshoot issues with the Self Node Remediation Operator. Resolution Check the Operator logs. 2.5.3.2. Checking the daemon set Issue The Self Node Remediation Operator is installed but the daemon set is not available. Resolution Check the Operator logs for errors or warnings. 2.5.3.3. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the SelfNodeRemediation CR was created by running the following command: USD oc get snr -A If the MachineHealthCheck controller did not create the SelfNodeRemediation CR when the node turned unhealthy, check the logs of the MachineHealthCheck controller. Additionally, ensure that the MachineHealthCheck CR includes the required specification to use the remediation template. If the SelfNodeRemediation CR was created, ensure that its name matches the unhealthy node or the machine object. 2.5.3.4. Daemon set and other Self Node Remediation Operator resources exist even after uninstalling the Operator Issue The Self Node Remediation Operator resources, such as the daemon set, configuration CR, and the remediation template CR, exist even after after uninstalling the Operator. Resolution To remove the Self Node Remediation Operator resources, delete the resources by running the following commands for each resource type: USD oc delete ds <self-node-remediation-ds> -n <namespace> USD oc delete snrc <self-node-remediation-config> -n <namespace> USD oc delete snrt <self-node-remediation-template> -n <namespace> 2.5.4. Gathering data about the Self Node Remediation Operator To collect debugging information about the Self Node Remediation Operator, use the must-gather tool. For information about the must-gather image for the Self Node Remediation Operator, see Gathering data about specific features . 2.5.5. Additional resources Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster
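For reference, a hedged sketch of what a SelfNodeRemediation CR created for an unhealthy worker node might look like; the node name worker-1 is a hypothetical placeholder, and, as noted in the troubleshooting section (2.5.3.3), the name of the CR must match the unhealthy node or the machine object:

apiVersion: self-node-remediation.medik8s.io/v1alpha1
kind: SelfNodeRemediation
metadata:
  name: worker-1 # hypothetical node name; must match the unhealthy node or machine object
  namespace: openshift-workload-availability
spec:
  remediationStrategy: Automatic # default strategy; ResourceDeletion and OutOfServiceTaint are the other documented options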
|
[
"apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediation metadata: name: selfnoderemediation-sample namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 1 status: lastError: <last_error_message> 2",
"apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability",
"oc create -f workload-availability-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability",
"oc create -f workload-availability-operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: self-node-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable installPlanApproval: Manual 2 name: self-node-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: self-node-remediation",
"oc create -f self-node-remediation-subscription.yaml",
"oc get selfnoderemediationtemplate -n openshift-workload-availability",
"self-node-remediation-automatic-strategy-template",
"oc get csv -n openshift-workload-availability",
"NAME DISPLAY VERSION REPLACES PHASE self-node-remediation.v0.8.0 Self Node Remediation Operator v.0.8.0 self-node-remediation.v0.7.1 Succeeded",
"oc get deployment -n openshift-workload-availability",
"NAME READY UP-TO-DATE AVAILABLE AGE self-node-remediation-controller-manager 1/1 1 1 28h",
"oc get selfnoderemediationconfig -n openshift-workload-availability",
"NAME AGE self-node-remediation-config 28h",
"oc get daemonset -n openshift-workload-availability",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE self-node-remediation-ds 6 6 6 6 6 <none> 28h",
"apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationConfig metadata: name: self-node-remediation-config namespace: openshift-workload-availability spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /dev/watchdog 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10 hostPort: 30001 11 customDsTolerations: 12 - effect: NoSchedule key: node-role.kubernetes.io.infra operator: Equal value: \"value1\" tolerationSeconds: 3600",
"controllers.SelfNodeRemediationConfig ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config' or not in the namespace of the operator: 'openshift-workload-availability' {\"selfnoderemediationconfig\": \"openshift-workload-availability/selfnoderemediationconfig-copy\"}",
"apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: creationTimestamp: \"2022-03-02T08:02:40Z\" name: self-node-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2",
"oc get snr -A",
"oc delete ds <self-node-remediation-ds> -n <namespace>",
"oc delete snrc <self-node-remediation-config> -n <namespace>",
"oc delete snrt <self-node-remediation-template> -n <namespace>"
] |
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/self-node-remediation-operator-remediate-nodes
|
30.2. Text Mode
|
30.2. Text Mode If you installed Red Hat Enterprise Linux without the X Window System , the Initial Setup starts in text mode: Figure 30.4. Initial Setup in text mode To configure an entry, enter the menu number and press Enter . Additionally, you can press the following keys: q to close the application. Until you have accepted the license agreement, closing the application causes the system to reboot. c to continue. Pressing this key in a submenu returns you to the main menu. In the main menu, pressing the c key stores the settings and closes the application. Note that you cannot continue without accepting the license agreement. r to refresh the menu. Menu entries can have different statuses: [x] : This setting is already configured. However, you can change the setting. [!] : This setting is mandatory but not yet set. [ ] : This setting is optional and not yet set. To start the Initial Setup again, see Section 30.3, "Starting Initial Setup Manually" .
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-initial-setup-text
|
Appendix A. Revision History
|
Appendix A. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 6.7-4 Wed Mar 8 2017 Aneta Steflova Petrova Version for 6.9 GA publication. Revision 6.7-3 Wed May 4 2016 Marc Muehlfeld Preparing document for 6.8 GA publication. Revision 6.7-2 Thu Jan 7 2016 Aneta Petrova Rebuilt with an updated brand. Revision 6.7-1 Tue Jan 5 2016 Aneta Petrova Fixed rendering of PAM configuration examples. Revision 6.7-0 Tue Jul 14 2015 Tomas Capek Version for 6.7 GA release. Revision 6.6-1 Fri Dec 19 2014 Tomas Capek Rebuilt to update the sort order on the splash page. Revision 6.6-0 Fri Oct 10 2014 Tomas Capek Version for 6.6 GA release. Revision 6.4-0 March 28, 2013 Ella Deon Lackey Fixed formatting for publican upgrade. Revision 6.2-4 December 5, 2011 Ella Deon Lackey Release for 6.2. GA. Added PIV and CAC card to supported smart cards list. Revision 6.1-0 Thu May 5, 2011 Ella Deon Lackey Fixed bugs, other updates. Revision 6.0-0 Thu Oct 22 2009 Ella Deon Lackey Initial draft for Red Hat Enterprise Linux 6.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/doc-history
|
Chapter 12. Using images
|
Chapter 12. Using images 12.1. Using images overview Use the following topics to discover the different Source-to-Image (S2I), database, and other container images that are available for OpenShift Container Platform users. Red Hat official container images are provided in the Red Hat Registry at registry.redhat.io . OpenShift Container Platform's supported S2I, database, and Jenkins images are provided in the openshift4 repository in the Red Hat Quay Registry. For example, quay.io/openshift-release-dev/ocp-v4.0-<address> is the name of the OpenShift Application Platform image. The xPaaS middleware images are provided in their respective product repositories on the Red Hat Registry but suffixed with a -openshift . For example, registry.redhat.io/jboss-eap-6/eap64-openshift is the name of the JBoss EAP image. All Red Hat supported images covered in this section are described in the Container images section of the Red Hat Ecosystem Catalog . For every version of each image, you can find details on its contents and usage. Browse or search for the image that interests you. Important The newer versions of container images are not compatible with earlier versions of OpenShift Container Platform. Verify and use the correct version of container images, based on your version of OpenShift Container Platform. 12.2. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 12.2.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 12.2.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. 
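As a hedged CLI example of the environment-variable approach just described, assuming the Jenkins deployment configuration created by the provided templates is named jenkins, OAuth support could be enabled as follows:

USD oc set env dc/jenkins OPENSHIFT_ENABLE_OAUTH=true

Similarly, a minimal sketch of the openshift-jenkins-login-plugin-config config map, whose role-mapping rules are described in detail below, might look like the following; the Overall-Read key and the role lists are illustrative assumptions, and the real permission group and permission IDs should be taken from the Jenkins matrix authorization page:

kind: ConfigMap
apiVersion: v1
metadata:
  name: openshift-jenkins-login-plugin-config # must be created in the namespace that Jenkins runs in
data:
  # <permission group short ID>-<permission short ID>: comma-separated list of OpenShift Container Platform roles
  Overall-Administer: admin
  Overall-Read: admin,edit,view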
Valid users are automatically added to the Jenkins authorization matrix at login, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and review the IDs for the groups and individual permissions in the table it provides. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions, the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. The permissions stored in Jenkins for a user can be changed after the user is initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 12.2.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password .
Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ openshift4/ose-jenkins 12.2.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. 
Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. The fatal error file is saved at /var/lib/jenkins/logs . Default: false NODEJS_SLAVE_IMAGE Setting this value overrides the image that is used for the default Node.js agent pod configuration. A related image stream tag named jenkins-agent-nodejs is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. Default Node.js agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-nodejs:latest MAVEN_SLAVE_IMAGE Setting this value overrides the image used for the default maven agent pod configuration. A related image stream tag named jenkins-agent-maven is in the project. This variable must be set before Jenkins starts the first time for it to have an effect. Default Maven agent image in Jenkins server: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-maven:latest AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest NODEJS_BUILDER_IMAGE Setting this value overrides the image used for the nodejs-builder container in the nodejs-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the nodejs:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/nodejs:latest 12.2.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your same project, you must provide an access token to Jenkins to access your project. 
Procedure Identify the secret for the service account that has appropriate permissions to access the project Jenkins must access: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case the secret is named jenkins-token-uyswp . Retrieve the token from the secret: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 12.2.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 12.2.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins jobs definitions, add additional plugins, or replace the provided config.xml file with your own, custom, configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains those binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory is copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. Sample build configuration customizes the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 12.2.6. Configuring the Jenkins Kubernetes plugin The OpenShift Container Platform Jenkins image includes the pre-installed Kubernetes plugin that allows Jenkins agents to be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides images that are suitable for use as Jenkins agents, including the Base, Maven, and Node.js images. Both the Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. 
That configuration includes labels for each of the images that can be applied to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams present in the openshift namespace. The OpenShift Container Platform Jenkins image has two pod templates named java-build and nodejs-builder with sidecar containers that demonstrate this approach. These two pod templates use the latest Java and NodeJS versions provided by the java and nodejs image streams in the openshift namespace. With this update, in OpenShift Container Platform 4.10 and later, the non-sidecar maven and nodejs pod templates for Jenkins are deprecated. These pod templates are planned for removal in a future release. Bug fixes and support are provided through the end of that future life cycle, after which no new feature enhancements will be made. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. With the OpenShift Container Platform sync plugin, the Jenkins image on Jenkins startup searches for the following within the project that it is running or the projects specifically listed in the plugin's configuration: Image streams that have the label role set to jenkins-agent . Image stream tags that have the annotation role set to jenkins-agent . Config maps that have the label role set to jenkins-agent . When it finds an image stream with the appropriate label, or image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration so you can assign your Jenkins jobs to run in a pod that runs the container image that is provided by the image stream. The name and image references of the image stream or image stream tag are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream or image stream tag object with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, it assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) that is consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. 
One key benefit of using config maps, rather than image streams or image stream tags, is that you can control all the parameters of the Kubernetes plugin pod template. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams that are present in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note If you log in to the Jenkins console and make further changes to the pod template configuration after the pod template is created, and the OpenShift Container 
Platform Sync plugin detects that the config map has changed, it will replace the pod template and overwrite those configuration changes. You cannot merge a new configuration with the existing configuration. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag results in the deletion of any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. Either creating appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or adding the labels after their initial creation, leads to the creation of a PodTemplate in the Kubernetes plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration and override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. For more details, see the official Jenkins documentation . 12.2.7. Jenkins permissions If the <serviceAccount> element of the pod template XML in the config map specifies the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition to Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use.
Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from the within the pod. 12.2.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. For example: USD oc describe jenkins-ephemeral 12.2.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. 
The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes Plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp . 5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } 12.2.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other process that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. And if Project quotas allow for it, see recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations proscribe to allocate even more memory for the Jenkins master. 
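For example, a minimal way to act on that sizing recommendation is to raise the master memory limit when you instantiate the provided template, or on the resulting deployment afterwards. The following sketch assumes the jenkins-persistent template is available in your cluster and that the resulting deployment config is named jenkins; adjust the names and sizes for your environment:

# Instantiate Jenkins with a 2 Gi memory limit instead of the 1 Gi default
oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi

# Or raise the limit on an existing Jenkins deployment config (the name is an assumption)
oc set resources dc/jenkins --limits=memory=2Gi --requests=memory=1Gi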
It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 12.2.11. Additional resources See Base image options for more information on the Red Hat Universal Base Images (UBI). 12.3. Jenkins agent OpenShift Container Platform provides Base, Maven, and Node.js images for use as Jenkins agents. The Base image for Jenkins agents does the following: Pulls in both the required tools, headless Java, the Jenkins JNLP client, and the useful ones, including git , tar , zip , and nss , among others. Establishes the JNLP agent as the entry point. Includes the oc client tooling for invoking command line operations from within Jenkins jobs. Provides Dockerfiles for both Red Hat Enterprise Linux (RHEL) and localdev images. The Maven v3.5, Node.js v10, and Node.js v12 images extend the Base image. They provide Dockerfiles for the Universal Base Image (UBI) that you can reference when building new agent images. Also note the contrib and contrib/bin subdirectories, which enable you to insert configuration files and executable scripts for your image. Important Use a version of the agent image that is appropriate for your OpenShift Container Platform release version. Embedding an oc client version that is not compatible with the OpenShift Container Platform version can cause unexpected behavior. The OpenShift Container Platform Jenkins image also defines the following sample pod templates to illustrate how you can use these agent images with the Jenkins Kubernetes plugin: The maven pod template, which uses a single container that uses the OpenShift Container Platform Maven Jenkins agent image. The nodejs pod template, which uses a single container that uses the OpenShift Container Platform Node.js Jenkins agent image. The java-builder pod template, which employs two containers. One is the jnlp container, which uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. The second is the java container which uses the java OpenShift Container Platform Sample ImageStream, which contains the various Java binaries, including the Maven binary mvn , for building code. The nodejs-builder pod template, which employs two containers. One is the jnlp container, which uses the OpenShift Container Platform Base agent image and handles the JNLP contract for starting and stopping Jenkins agents. The second is the nodejs container which uses the nodejs OpenShift Container Platform Sample ImageStream, which contains the various Node.js binaries, including the npm binary, for building code. 12.3.1. Jenkins agent images The OpenShift Container Platform Jenkins agent images are available on Quay.io or registry.redhat.io . 
Jenkins images are available through the Red Hat Registry: USD docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0> USD docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0> USD docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0> USD docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0> USD docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0> To use these images, you can either access them directly from Quay.io or registry.redhat.io or push them into your OpenShift Container Platform container image registry. 12.3.2. Jenkins agent environment variables Each Jenkins agent container can be configured with the following environment variables. Variable Definition Example values and settings JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and can be used to override any of them, if necessary. Separate each additional option with a space and if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value USE_JAVA_VERSION Specifies the version of Java version to use to run the agent in its container. The container base image has two versions of java installed: java-11 and java-1.8.0 . If you extend the container base image, you can specify any alternative version of java using its associated suffix. The default value is java-11 . Example setting: java-1.8.0 12.3.3. Jenkins agent memory requirements A JVM is used in all Jenkins agents to host the Jenkins JNLP agent as well as to run any Java applications such as javac , Maven, or Gradle. By default, the Jenkins JNLP agent JVM uses 50% of the container memory limit for its heap. 
This value can be modified by the CONTAINER_HEAP_PERCENT environment variable. It can also be capped at an upper limit or overridden entirely. By default, any other processes run in the Jenkins agent container, such as shell scripts or oc commands run from pipelines, cannot use more than the remaining 50% memory limit without provoking an OOM kill. By default, each further JVM process that runs in a Jenkins agent container uses up to 25% of the container memory limit for its heap. It might be necessary to tune this limit for many build workloads. 12.3.4. Jenkins agent Gradle builds Hosting Gradle builds in the Jenkins agent on OpenShift Container Platform presents additional complications because in addition to the Jenkins JNLP agent and Gradle JVMs, Gradle spawns a third JVM to run tests if they are specified. The following settings are suggested as a starting point for running Gradle builds in a memory constrained Jenkins agent on OpenShift Container Platform. You can modify these settings as required. Ensure the long-lived Gradle daemon is disabled by adding org.gradle.daemon=false to the gradle.properties file. Disable parallel build execution by ensuring org.gradle.parallel=true is not set in the gradle.properties file and that --parallel is not set as a command line argument. To prevent Java compilations running out-of-process, set java { options.fork = false } in the build.gradle file. Disable multiple additional test processes by ensuring test { maxParallelForks = 1 } is set in the build.gradle file. Override the Gradle JVM memory parameters by the GRADLE_OPTS , JAVA_OPTS or JAVA_TOOL_OPTIONS environment variables. Set the maximum heap size and JVM arguments for any Gradle test JVM by defining the maxHeapSize and jvmArgs settings in build.gradle , or through the -Dorg.gradle.jvmargs command line argument. 12.3.5. Jenkins agent pod retention Jenkins agent pods, are deleted by default after the build completes or is stopped. This behavior can be changed by the Kubernetes plugin pod retention setting. Pod retention can be set for all Jenkins builds, with overrides for each pod template. The following behaviors are supported: Always keeps the build pod regardless of build result. Default uses the plugin value, which is the pod template only. Never always deletes the pod. On Failure keeps the pod if it fails during the build. You can override pod retention in the pipeline Jenkinsfile: podTemplate(label: "mypod", cloud: "openshift", inheritFrom: "maven", podRetention: onFailure(), 1 containers: [ ... ]) { node("mypod") { ... } } 1 Allowed values for podRetention are never() , onFailure() , always() , and default() . Warning Pods that are kept might continue to run and count against resource quotas. 12.4. Source-to-image You can use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. You can use the Red Hat Java Source-to-Image for OpenShift documentation as a reference for runtime environments that use Java. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images include: .NET Java Go Node.js Perl PHP Python Ruby S2I images are available for you to use directly from the OpenShift Container Platform web console by following procedure: Log in to the OpenShift Container Platform web console using your login credentials. 
The default view for the OpenShift Container Platform web console is the Administrator perspective. Use the perspective switcher to switch to the Developer perspective. In the +Add view, select an existing project from the list or use the Project drop-down list to create a new project. Choose All services under the Developer Catalog tile. Select the Builder Images type to see the available S2I images. S2I images are also available through the Cluster Samples Operator. For more information, see Configuring the Cluster Samples Operator . 12.4.1. Source-to-image build process overview Source-to-image (S2I) produces ready-to-run images by injecting source code into a container that prepares that source code to be run. It performs the following steps: Runs the FROM <builder image> command Copies the source code to a defined location in the builder image Runs the assemble script in the builder image Sets the run script in the builder image as the default command Buildah then creates the container image. 12.4.2. Additional resources For instructions on using the Cluster Samples Operator, see the Configuring the Cluster Samples Operator . For more information on S2I builds, see the builds strategy documentation on S2I builds . For troubleshooting assistance for the S2I process, see Troubleshooting the Source-to-Image process . For an overview of creating images with S2I, see Creating images from source code with source-to-image . For an overview of testing S2I images, see About testing S2I images . 12.5. Customizing source-to-image images Source-to-image (S2I) builder images include assemble and run scripts, but the default behavior of those scripts is not suitable for all users. You can customize the behavior of an S2I builder that includes default scripts. 12.5.1. Invoking scripts embedded in an image Builder images provide their own version of the source-to-image (S2I) scripts that cover the most common use cases. If these scripts do not fulfill your needs, S2I provides a way of overriding them by adding custom ones in the .s2i/bin directory. However, by doing this, you are completely replacing the standard scripts. In some cases, replacing the scripts is acceptable, but, in other scenarios, you can run a few commands before or after the scripts while retaining the logic of the script provided in the image. To reuse the standard scripts, you can create a wrapper script that runs custom logic and delegates further work to the default scripts in the image. Procedure Look at the value of the io.openshift.s2i.scripts-url label to determine the location of the scripts inside the builder image: USD podman inspect --format='{{ index .Config.Labels "io.openshift.s2i.scripts-url" }}' wildfly/wildfly-centos7 Example output image:///usr/libexec/s2i You inspected the wildfly/wildfly-centos7 builder image and found out that the scripts are in the /usr/libexec/s2i directory. Create a script that includes an invocation of one of the standard scripts wrapped in other commands: .s2i/bin/assemble script #!/bin/bash echo "Before assembling" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo "After successful assembling" else echo "After failed assembling" fi exit USDrc This example shows a custom assemble script that prints a message, runs the standard assemble script from the image, and prints another message depending on the exit code of the assemble script. Important When wrapping the run script, you must use exec for invoking it to ensure signals are handled properly.
The use of exec also precludes the ability to run additional commands after invoking the default image run script. .s2i/bin/run script #!/bin/bash echo "Before running application" exec /usr/libexec/s2i/run
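To put the wrapper scripts to use, commit them to your application source repository and start a new build so that S2I picks them up. The following sketch is illustrative only; the repository layout and the BuildConfig name are assumptions for your own project:

# From the root of your application source repository
mkdir -p .s2i/bin
chmod +x .s2i/bin/assemble .s2i/bin/run
git add .s2i/bin
git commit -m "Add S2I wrapper scripts"
git push

# Trigger a new build that uses the wrapper scripts (replace with your BuildConfig name)
oc start-build <your-buildconfig> --follow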
|
[
"podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>",
"oc new-app -e JENKINS_PASSWORD=<password> openshift4/ose-jenkins",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>",
"docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7",
"image:///usr/libexec/s2i",
"#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc",
"#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/images/using-images
|
5.26. cluster and gfs2-utils
|
5.26. cluster and gfs2-utils 5.26.1. RHBA-2012:0861 - cluster and gfs2-utils bug fix and enhancement update Updated cluster and gfs2-utils packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The cluster and gfs2-utils packages contain the core clustering libraries for Red Hat High Availability as well as utilities to maintain GFS2 file systems for users of Red Hat Resilient Storage. Bug Fixes BZ# 759603 A race condition existed when a node lost contact with the quorum device at the same time as the token timeout period expired. The nodes raced to fence, which could lead to a cluster failure. To prevent the race condition from occurring, the cman and qdiskd interaction timer has been improved. BZ# 750314 Previously, a cluster partition and merge during startup fencing was not detected correctly. As a consequence, the DLM (Distributed Lock Manager) lockspace operations could become unresponsive. With this update, the partition and merge event is now detected and handled properly. DLM lockspace operations no longer become unresponsive in the described scenario. BZ# 745538 Multiple ping command examples on the qdisk(5) manual page did not include the -w option. If the ping command is run without the option, the action can timeout. With this update, the -w option has been added to those ping commands. BZ# 745161 Due to a bug in libgfs2, sentinel directory entries were counted as if they were real entries. As a consequence, the mkfs.gfs2 utility created file systems which did not pass the fsck check when a large number of journal metadata blocks were required (for example, a file system with block size of 512, and 9 or more journals). With this update, incrementing the count of the directory entry is now avoided when dealing with sentinel entries. GFS2 file systems created with large numbers of journal metadata blocks now pass the fsck check cleanly. BZ# 806002 When a node fails and gets fenced, the node is usually rebooted and joins the cluster with a fresh state. However, if a block occurs during the rejoin operation, the node cannot rejoin the cluster and the attempt fails during boot. Previously, in such a case, the cman init script did not revert actions that had happened during startup and some daemons could be erroneously left running on a node. The underlying source code has been modified so that the cman init script now performs a full rollback when errors are encountered. No daemons are left running unnecessarily in this scenario. BZ# 804938 The RELAX NG schema used to validate the cluster.conf file previously did not recognize the totem.miss_count_const constant as a valid option. As a consequence, users were not able to validate cluster.conf when this option was in use. This option is now recognized correctly by the RELAX NG schema, and the cluster.conf file can be validated as expected. BZ# 819787 The cmannotifyd daemon is often started after the cman utility, which means that cmannotifyd does not receive or dispatch any notifications on the current cluster status at startup. This update modifies the cman connection loop to generate a notification that the configuration and membership have changed. BZ# 749864 Incorrect use of the free() function in the gfs2_edit code could lead to memory leaks and so cause various problems. For example, when the user executed the gfs2_edit savemeta command, the gfs2_edit utility could become unresponsive or even terminate unexpectedly. 
This update applies multiple upstream patches so that the free() function is now used correctly and memory leaks no longer occur. With this update, save statistics for the gfs2_edit savemeta command are now reported more often so that users know that the process is still running when saving a large dinode with a huge amount of metadata. BZ# 742595 Previously, the gfs2_grow utility failed to expand a GFS file system if the file system contained only one resource group. This was due to the old code being based on GFS1 (which had different fields) that calculated distances between resource groups and did not work with only one resource group. This update adds the rgrp_size() function in libgfs2, which calculates the size of the resource group instead of determining its distance from the resource group. A file system with only one resource group can now be expanded successfully. BZ# 742293 Previously, the gfs2_edit utility printed unclear error messages when the underlying device did not contain a valid GFS2 file system, which could be confusing. With this update, users are provided with additional information in the aforementioned scenario. BZ# 769400 Previously, the mkfs utility provided users with insufficient error messages when creating a GFS2 file system. The messages also contained absolute build paths and source code references, which was unwanted. A patch has been applied to provide users with comprehensive error messages in the described scenario. BZ# 753300 The gfs_controld daemon ignored an error returned by the dlm_controld daemon for the dlmc_fs_register() function while mounting a file system. This resulted in a successful mount, but recovery of a GFS file system could not be coordinated using Distributed Lock Manager (DLM). With this update, mounting a file system is not successful under these circumstances and an error message is returned instead. Enhancements BZ# 675723 , BZ# 803510 The gfs2_convert utility can be used on a GFS1 file system to convert a file system from GFS1 to GFS2 . However, the gfs2_convert utility required the user to run the gfs_fsck utility prior to conversion, but because this tool is not included in Red Hat Enterprise Linux 6, users had to use Red Hat Enterprise Linux 5 to run this utility. With this update, the gfs2_fsck utility now allows users to perform a complete GFS1 to GFS2 conversion on Red Hat Enterprise Linux 6 systems. BZ# 678372 Cluster tuning using the qdiskd daemon and the device-mapper-multipath utility is a very complex operation, and it was previously easy to misconfigure qdiskd in this setup, which could consequently lead to a cluster node failure. Input and output operations of the qdiskd daemon have been improved to automatically detect multipath-related timeouts without requiring manual configuration. Users can now easily deploy qdiskd with device-mapper-multipath. BZ# 733298 , BZ# 740552 Previously, the cman utility was not able to configure Redundant Ring Protocol (RRP) correctly in corosync, resulting in RRP deployments not working properly. With this update, cman has been improved to configure RRP properly and to perform extra sanity checks on user configurations. It is now easier to deploy a cluster with RRP and the user is provided with more extensive error reports. BZ# 745150 With this update, Red Hat Enterprise Linux High Availability has been validated against the VMware vSphere 5.0 release.
BZ# 749228 With this update, the fence_scsi fencing agent has been validated for use in a two-node cluster with High Availability LVM (HA-LVM). All users of cluster and gfs2-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
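As a rough illustration of the conversion enhancement described in BZ#675723 and BZ#803510, a GFS1 to GFS2 conversion on Red Hat Enterprise Linux 6 might look like the following sketch. The device path is a placeholder, the file system must be unmounted on every cluster node first, and you should back up the volume and consult the fsck.gfs2(8) and gfs2_convert(8) manual pages before running anything like this:

# Unmount the file system on all nodes, then check it
fsck.gfs2 -y /dev/clustervg/gfs_lv

# Convert the checked GFS1 file system in place to GFS2
gfs2_convert -y /dev/clustervg/gfs_lv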
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/cluster_and_gfs2-utils
|
Chapter 2. Upgrading Red Hat Satellite
|
Chapter 2. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.16. 2.1. Satellite Server upgrade considerations This section describes how to upgrade Satellite Server from 6.15 to 6.16. You can upgrade from any minor version of Satellite Server 6.15. Before you begin Review Section 1.2, "Prerequisites" . Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Review and update your firewall configuration. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Optional: Clone your Satellite Server to test the upgrade. After you successfully test the upgrade on the clone, you can repeat the upgrade on your primary Satellite Server and discard the clone, or you can promote the clone to your primary Satellite Server and discard the primary Satellite Server. For more information, see Cloning Satellite Server in Administering Red Hat Satellite . Capsule considerations If you use content views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those content views. Note that Satellite Server upgraded from 6.15 to 6.16 can use Capsule Servers still at 6.15. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . 2.2. Upgrading a disconnected Satellite Server Use this procedure if your Satellite Server is not connected to the Red Hat Content Delivery Network. Warning If you customized configuration files, either manually or using a tool such as Hiera, these changes are overwritten when you enter the satellite-maintain command during upgrading or updating. You can use the --noop option with the satellite-installer command to review the changes that are applied during upgrading or updating. 
For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade . The hammer import and export commands have been replaced with hammer content-import and hammer content-export tooling. If you have scripts that are using hammer content-view version export , hammer content-view version export-legacy , hammer repository export , or their respective import commands, you have to adjust them to use the hammer content-export command instead, along with its respective import command. If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Before you begin Review and update your firewall configuration before upgrading your Satellite Server. For more information, see Port and firewall requirements in Installing Satellite Server in a disconnected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. All Satellite Servers must be on the same version. Upgrade disconnected Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . If there are discovered hosts available, turn them off and then delete all entries under the Discovered hosts page. Select all other organizations in turn using the organization setting menu and repeat this action as required. Reboot these hosts after the upgrade has completed. Remove old repositories: Obtain the latest ISO files by following the Downloading the Binary DVD Images procedure in Installing Satellite Server in a disconnected network environment . Create directories to serve as a mount point, mount the ISO images, and configure the rhel8 repository by following the Configuring the base operating system with offline repositories procedure in Installing Satellite Server in a disconnected network environment . Do not install or update any packages at this stage. Configure the Satellite 6.16 repository from the ISO file. Copy the ISO file's repository data file for the Red Hat Satellite packages: Edit the /etc/yum.repos.d/satellite.repo file: Change the default InstallMedia repository name to Satellite-6.16 : Add the baseurl directive: Configure the Red Hat Satellite Maintenance repository from the ISO file. 
Copy the ISO file's repository data file for Red Hat Satellite Maintenance packages: Edit the /etc/yum.repos.d/satellite-maintenance.repo file: Change the default InstallMedia repository name to Satellite-Maintenance : Add the baseurl directive: Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logs in /var/log/foreman-installer/satellite.log to check if the process completed successfully. Upgrade satellite-maintain to its next version: If you are using an external database, upgrade your database to PostgreSQL 13. Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the script fails due to missing or outdated packages, you must download and install these separately. For more information, see Resolving Package Dependency Errors in Installing Satellite Server in a disconnected network environment . If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups that you made. If you made changes in the previous step, restart Satellite services: If you have the OpenSCAP plugin installed, but do not have the default OpenSCAP content available, enter the following command. In the Satellite web UI, navigate to Configure > Discovery Rules . Associate selected organizations and locations with discovery rules. steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.3. Synchronizing the new repositories You must enable and synchronize the new 6.16 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Satellite Maintenance 6.16 (for RHEL 8 x86_64) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - BaseOS) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - AppStream) (RPMs) Note If the 6.16 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow next to the product to view the available repositories. Select the repositories for 6.16. Note that Red Hat Satellite Client 6 does not have a 6.16 version. Choose Red Hat Satellite Client 6 instead.
Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . 2.4. Performing post-upgrade tasks Optional: If the default provisioning templates have been changed during the upgrade, recreate any templates cloned from the default templates. If the custom code is executed before and/or after the provisioning process, use custom provisioning snippets to avoid recreating cloned templates. For more information about configuring custom provisioning snippets, see Creating Custom Provisioning Snippets in Provisioning hosts . Pulp is introducing more data about container manifests to the API. This information allows Katello to display manifest labels, annotations, and information about the manifest type, such as if it is bootable or represents flatpak content. As a result, migrations must be performed to pull this content from manifests into the database. This migration takes time, so a pre-migration runs automatically after the upgrade to 6.16 to reduce future upgrade downtime. While the pre-migration is running, Satellite Server is fully functional but uses more hardware resources. 2.5. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.15 to 6.16. Before you begin Review Section 1.2, "Prerequisites" . You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Ensure the Red Hat Satellite Capsule 6.16 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 2.3, "Synchronizing the new repositories" . If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . 
Clean yum cache: Synchronize the satellite-capsule-6.16-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. Optional: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. The rubygem-foreman_maintain is installed from the Satellite Maintenance repository or upgraded from the Satellite Maintenance repository if currently installed. Ensure Capsule has access to satellite-maintenance-6.16-for-rhel-8-x86_64-rpms and execute: On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. Upgrading Capsule Servers using remote execution Create a backup or take a snapshot. For more information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Maintenance Operations . From the Job template list, select Capsule Upgrade Playbook . In the Search Query field, enter the host name of the Capsule. Ensure that Apply to 1 host is displayed in the Resolves to field. In the target_version field, enter the target version of the Capsule. In the whitelist_options field, enter the options. Select the schedule for the job execution in Schedule . In the Type of query section, click Static Query . steps Optional: Upgrade the operating system to Red Hat Enterprise Linux 9 on the upgraded Satellite Server. For more information, see Chapter 3, Upgrading Red Hat Enterprise Linux on Satellite or Capsule . 2.6. Upgrading the external database You can upgrade an external database from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 while upgrading Satellite from 6.15 to 6.16. Prerequisites Create a new Red Hat Enterprise Linux 9 based host for PostgreSQL server that follows the external database on Red Hat Enterprise Linux 9 documentation. For more information, see Using External Databases with Satellite . Install PostgreSQL version 13 on the new Red Hat Enterprise Linux host. Procedure Create a backup. Restore the backup on the new server. Correct the permissions on the evr extension: If Satellite reaches the new database server via the old name, no further changes are required. Otherwise reconfigure Satellite to use the new name:
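The Capsule upgrade procedure earlier in this chapter requires you to publish and promote a new version of the content view with which the Capsule is registered. If you prefer the CLI over the Satellite web UI, a sketch along the following lines might work; the organization, content view, lifecycle environment, and version number are placeholders for your own values, and you should confirm the exact options with hammer content-view --help on your Satellite:

# Publish a new version of the content view that serves the Capsule
hammer content-view publish --organization "Example Org" --name "Capsule CV"

# Promote the newly published version to the lifecycle environment the Capsule uses
hammer content-view version promote --organization "Example Org" \
  --content-view "Capsule CV" --version 2.0 --to-lifecycle-environment "Production"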
|
[
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false",
"rm /etc/yum.repos.d/*",
"cp /media/sat6/Satellite/media.repo /etc/yum.repos.d/satellite.repo",
"vi /etc/yum.repos.d/satellite.repo",
"[Satellite-6.16]",
"baseurl=file:///media/sat6/Satellite",
"cp /media/sat6/Maintenance/media.repo /etc/yum.repos.d/satellite-maintenance.repo",
"vi /etc/yum.repos.d/satellite-maintenance.repo",
"[Satellite-Maintenance]",
"baseurl=file:///media/sat6/Maintenance/",
"satellite-maintain self-upgrade --maintenance-repo-label Satellite-Maintenance",
"satellite-maintain upgrade check --whitelist=\"repositories-validate,repositories-setup\"",
"satellite-maintain upgrade run --whitelist=\"repositories-validate,repositories-setup\"",
"reboot",
"satellite-maintain service restart",
"foreman-rake foreman_openscap:bulk_upload:default",
"yum clean metadata",
"satellite-maintain self-upgrade",
"grep foreman_url /etc/foreman-proxy/settings.yml",
"satellite-maintain upgrade check",
"satellite-maintain upgrade run",
"reboot",
"runuser -l postgres -c \"psql -d foreman -c \\\"UPDATE pg_extension SET extowner = (SELECT oid FROM pg_authid WHERE rolname='foreman') WHERE extname='evr';\\\"\"",
"satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_disconnected_red_hat_satellite_to_6.16/Upgrading_satellite_upgrading-disconnected
|
11.2.9. Dialup Interfaces
|
11.2.9. Dialup Interfaces If you are connecting to the Internet via a dialup connection, a configuration file is necessary for the interface. PPP interface files are named using the following format: ifcfg-ppp X where X is a unique number corresponding to a specific interface. The PPP interface configuration file is created automatically when wvdial , or Kppp is used to create a dialup account. It is also possible to create and edit this file manually. The following is a typical /etc/sysconfig/network-scripts/ifcfg-ppp0 file: Serial Line Internet Protocol ( SLIP ) is another dialup interface, although it is used less frequently. SLIP files have interface configuration file names such as ifcfg-sl0 . Other options that may be used in these files include: DEFROUTE = answer where answer is one of the following: yes - Set this interface as the default route. no - Do not set this interface as the default route. DEMAND = answer where answer is one of the following: yes - This interface allows pppd to initiate a connection when someone attempts to use it. no - A connection must be manually established for this interface. IDLETIMEOUT = value where value is the number of seconds of idle activity before the interface disconnects itself. INITSTRING = string where string is the initialization string passed to the modem device. This option is primarily used in conjunction with SLIP interfaces. LINESPEED = value where value is the baud rate of the device. Possible standard values include 57600 , 38400 , 19200 , and 9600 . MODEMPORT = device where device is the name of the serial device that is used to establish the connection for the interface. MTU = value where value is the Maximum Transfer Unit ( MTU ) setting for the interface. The MTU refers to the largest number of bytes of data a frame can carry, not counting its header information. In some dialup situations, setting this to a value of 576 results in fewer packets dropped and a slight improvement to the throughput for a connection. NAME = name where name is the reference to the title given to a collection of dialup connection configurations. PAPNAME = name where name is the user name given during the Password Authentication Protocol ( PAP ) exchange that occurs to allow connections to a remote system. PERSIST = answer where answer is one of the following: yes - This interface should be kept active at all times, even if deactivated after a modem hang up. no - This interface should not be kept active at all times. REMIP = address where address is the IP address of the remote system. This is usually left unspecified. WVDIALSECT = name where name associates this interface with a dialer configuration in /etc/wvdial.conf . This file contains the phone number to be dialed and other important information for the interface.
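Because the WVDIALSECT directive points at a dialer section in /etc/wvdial.conf, the interface configuration file and the dialer configuration must agree on the section name. The following sketch shows what a matching dialer section named test might look like for the sample ifcfg-ppp0 file above; the modem device, phone number, and account details are placeholders, so review and adjust them before appending anything to the file as root:

# Append an illustrative dialer section named "test" to /etc/wvdial.conf
cat >> /etc/wvdial.conf <<'EOF'
[Dialer test]
Modem = /dev/modem
Baud = 115200
Init1 = ATZ
Phone = 5551234567
Username = test
Password = changeme
EOF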
|
[
"DEVICE=ppp0 NAME=test WVDIALSECT=test MODEMPORT=/dev/modem LINESPEED=115200 PAPNAME=test USERCTL=true ONBOOT=no PERSIST=no DEFROUTE=yes PEERDNS=yes DEMAND=no IDLETIMEOUT=600"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-networkscripts-interfaces-ppp0
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/making-open-source-more-inclusive
|
Chapter 1. Transactions in JBoss EAP
|
Chapter 1. Transactions in JBoss EAP A transaction consists of two or more operations that must either all succeed or all fail. A successful outcome results in a commit, and a failed outcome results in a rollback. In a rollback, each member's state is reverted before the transaction attempts the commit. 1.1. Transaction Subsystem The transactions subsystem allows you to configure the Transaction Manager (TM) options, such as timeout values, transaction logging, statistics collection, and whether to use Jakarta Transactions. The transactions subsystem consists of four main elements: Core environment The core environment includes the TM interface that allows the JBoss EAP server to control transaction boundaries on behalf of the resource being managed. A transaction coordinator manages communication with the transactional objects and resources that participate in transactions. Recovery environment The recovery environment of the JBoss EAP transaction service ensures that the system applies the results of a transaction consistently to all the resources affected by the transaction. This operation continues even if any application process or the machine hosting them crashes or loses network connectivity. Coordinator environment The coordinator environment defines custom properties for the transaction, such as default timeout and logging statistics. Object store JBoss EAP transaction service uses an object store to record the outcomes of transactions in a persistent manner for failure recovery. The Recovery Manager scans the object store and other locations of information, for transactions and resources that might need recovery. 1.2. Properties of the Transaction The typical standard for a well-designed transaction is that it is atomic, consistent, isolated, and durable (ACID): Atomic All members of the transaction must make the same decision regarding committing or rolling back the transaction. Consistent Transactions produce consistent results and preserve application specific invariants. Isolation The data being operated on must be locked before modification to prevent processes outside the scope of the transaction from modifying the data. Durable The effects of a committed transaction are not lost, except in the event of a catastrophic failure. 1.3. Components of a Transaction Transaction Coordinator The coordinator governs the outcome of a transaction. It is responsible for ensuring that the web services invoked by the client arrive at a consistent outcome. Transaction Context Transaction context is the information about a transaction that is propagated, which allows the transaction to span multiple services. Transaction Participant Participants are the services enrolled in a transaction using a participant model. Transaction Service Transaction service captures the model of the underlying transaction protocol and coordinates with the participants affiliated with a transaction according to that model. Transaction API Transaction API provides an interface for transaction demarcation and the registration of participants. 1.4. Principles of Transaction Management 1.4.1. XA Versus Non-XA Transactions Non-XA transactions involve only one resource. They do not have a transaction coordinator and a single resource does all the transaction work. They are sometimes called local transactions. XA transactions involve multiple resources. They also have a coordinating transaction manager with one or more databases, or other resources like Jakarta Messaging, all participating in a single transaction. 
They are referred to as global transactions.
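As an illustration of the coordinator environment settings described above, the transactions subsystem can be adjusted from the JBoss EAP management CLI. This is a minimal sketch rather than an excerpt from this guide; the attribute names default-timeout and statistics-enabled exist in current EAP releases, but verify them against your version with :read-resource-description before use:

# Raise the default transaction timeout to 300 seconds
/subsystem=transactions:write-attribute(name=default-timeout, value=300)
# Enable the statistics collection mentioned in the subsystem overview
/subsystem=transactions:write-attribute(name=statistics-enabled, value=true)
reload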
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/managing_transactions_on_jboss_eap/eap_transactions
|
Chapter 1. Understanding API tiers
|
Chapter 1. Understanding API tiers Important This guidance does not cover layered Red Hat build of MicroShift offerings. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with Red Hat build of MicroShift through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. 
Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the Red Hat build of MicroShift deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier route.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) 1.3. API deprecation policy Red Hat build of MicroShift is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API Red Hat build of MicroShift is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by Red Hat build of MicroShift is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. 
Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed.
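To illustrate the operators.operatorframework.io/internal-objects annotation mentioned in the tier 4 description, the following is a hedged sketch of how it can appear in a CSV; the Operator and CRD names are placeholders rather than values from any Red Hat product:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.0.0
  annotations:
    # JSON list of CRDs that are internal to the Operator and therefore treated as tier 4
    operators.operatorframework.io/internal-objects: '["internalconfigs.example.com"]'

Client tooling that honors this annotation hides the listed resources from end users, which matches the intent described above that such CRDs are for the Operator's internal use only.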
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/understanding-api-support-tiers
|
Chapter 10. Deploying with Key Manager
|
Chapter 10. Deploying with Key Manager If you deployed edge sites prior to the release of Red Hat OpenStack Platform 16.1.2, you must regenerate the roles.yaml file used for the DCN site's deployment to implement this feature. 10.1. Deploying edge sites with Key Manager If you want to include access to the Key Manager (barbican) service at edge sites, you must configure barbican at the central location. For information on installing and configuring barbican, see Deploying Barbican . You can configure access to barbican from DCN sites by including the /usr/share/openstack-tripleo-heat-templates/environments/services/barbican-edge.yaml environment file.
|
[
"openstack overcloud roles generate DistributedComputeHCI DistributedComputeHCIScaleOut -o ~/dcn0/roles_data.yaml",
"openstack overcloud deploy --stack dcn0 --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/dcn0/roles_data.yaml . -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican-edge.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/deploying_with_key_manager
|
Embedding in a RHEL for Edge image
|
Embedding in a RHEL for Edge image Red Hat build of MicroShift 4.18 Embedding in a RHEL for Edge image Red Hat OpenShift Documentation Team
|
[
"sudo mkdir -p /etc/osbuild-composer/repositories",
"sudo cp /usr/share/osbuild-composer/repositories/rhel-9.4.json /etc/osbuild-composer/repositories/rhel-9.4.json",
"\"baseurl\": \"https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//baseos/os\", 1",
"sudo sed -i \"s,dist/rhel<9>/<9.4>/USD(uname -m)/baseos/,eus/rhel<9>/<9.4>/USD(uname -m)/baseos/,g\" /etc/osbuild-composer/repositories/rhel-<9.4>.json 1",
"\"baseurl\": \"https://cdn.redhat.com/content/eus/rhel<9>/<9.4>//appstream/os\", 1",
"sudo sed -i \"s,dist/rhel<9>/<9.4>/USD(uname -m)/appstream/,eus/rhel<9>/<9.4>/USD(uname -m)/appstream/,g\" /etc/osbuild-composer/repositories/rhel-<9.4>.json 1",
"sudo composer-cli sources info baseos | grep 'url ='",
"url = \"https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/baseos/os\"",
"sudo composer-cli sources info appstream | grep 'url ='",
"url = \"https://cdn.redhat.com/content/eus/rhel9/9.4/x86_64/appstream/os\"",
"cat > rhocp-4.18.toml <<EOF id = \"rhocp-4.18\" name = \"Red Hat OpenShift Container Platform 4.18 for RHEL 9\" type = \"yum-baseurl\" url = \"https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/rhocp/4.18/os\" check_gpg = true check_ssl = true system = false rhsm = true EOF",
"cat > fast-datapath.toml <<EOF id = \"fast-datapath\" name = \"Fast Datapath for RHEL 9\" type = \"yum-baseurl\" url = \"https://cdn.redhat.com/content/dist/layered/rhel9/USD(uname -m)/fast-datapath/os\" check_gpg = true check_ssl = true system = false rhsm = true EOF",
"sudo composer-cli sources add rhocp-4.18.toml",
"sudo composer-cli sources add fast-datapath.toml",
"sudo composer-cli sources list",
"appstream baseos fast-datapath rhocp-4.18",
"cat > <microshift_blueprint.toml> <<EOF 1 name = \" <microshift_blueprint> \" 2 description = \"\" version = \"0.0.1\" modules = [] groups = [] [[packages]] name = \"microshift\" version = \"4.18.1\" 3 [customizations.services] enabled = [\"microshift\"] EOF",
"name = \"microshift_blueprint\" description = \"MicroShift 4.17.1 on x86_64 platform\" version = \"0.0.1\" modules = [] groups = [] [[packages]] 1 name = \"microshift\" version = \"4.17.1\" [customizations.services] 2 enabled = [\"microshift\"] [customizations.firewall] ports = [\"22:tcp\", \"80:tcp\", \"443:tcp\", \"5353:udp\", \"6443:tcp\", \"30000-32767:tcp\", \"30000-32767:udp\"] [[containers]] 3 source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4\" [[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd\" ... EOF",
"sudo composer-cli blueprints push <microshift_blueprint.toml> 1",
"sudo composer-cli blueprints depsolve <microshift_blueprint> | grep microshift 1",
"blueprint: microshift_blueprint v0.0.1 microshift-greenboot-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-networking-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-release-info-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-selinux-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch",
"sudo composer-cli blueprints depsolve <microshift_blueprint> 1",
"vi <microshift_blueprint.toml> 1",
"[[packages]] 1 name = \" <microshift-additional-package-name> \" 2 version = \"*\"",
"[[customizations.directories]] path = \"/etc/pki/ca-trust/source/anchors\"",
"[[customizations.files]] path = \"/etc/pki/ca-trust/source/anchors/cert1.pem\" data = \"<value>\"",
"sudo update-ca-trust",
"%post Update certificate trust storage in case new certificates were installed at /etc/pki/ca-trust/source/anchors directory update-ca-trust %end",
"BUILDID=USD(sudo composer-cli compose start-ostree --ref \"rhel/{op-system-version-major}/USD(uname -m)/edge\" <microshift_blueprint> edge-container | awk '/^Compose/ {print USD2}') 1",
"sudo composer-cli compose status",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 RUNNING Wed Jun 7 12:26:23 2023 microshift_blueprint 0.0.1 edge-container",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 FINISHED Wed Jun 7 12:32:37 2023 microshift_blueprint 0.0.1 edge-container",
"sudo composer-cli compose image USD{BUILDID}",
"sudo chown USD(whoami). USD{BUILDID}-container.tar",
"sudo chmod a+r USD{BUILDID}-container.tar",
"IMAGEID=USD(cat < \"./USD{BUILDID}-container.tar\" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*')",
"sudo podman run -d --name=minimal-microshift-server -p 8085:8080 USD{IMAGEID}",
"cat > microshift-installer.toml <<EOF name = \"microshift-installer\" description = \"\" version = \"0.0.0\" modules = [] groups = [] packages = [] EOF",
"sudo composer-cli blueprints push microshift-installer.toml",
"BUILDID=USD(sudo composer-cli compose start-ostree --url http://localhost:8085/repo/ --ref \"rhel/9/USD(uname -m)/edge\" microshift-installer edge-installer | awk '{print USD2}')",
"sudo composer-cli compose status",
"ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d RUNNING Wed Jun 7 13:22:20 2023 microshift-installer 0.0.0 edge-installer",
"ID Status Time Blueprint Version Type Size c793c24f-ca2c-4c79-b5b7-ba36f5078e8d FINISHED Wed Jun 7 13:34:49 2023 microshift-installer 0.0.0 edge-installer",
"sudo composer-cli compose image USD{BUILDID}",
"sudo chown USD(whoami). USD{BUILDID}-installer.iso",
"sudo chmod a+r USD{BUILDID}-installer.iso",
"Partition disk such that it contains an LVM volume group called `rhel` with a 10GB+ system root but leaving free space for the LVMS CSI driver for storing data. # For example, a 20GB disk would be partitioned in the following way: # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 200M 0 part /boot/efi ├─sda1 8:1 0 800M 0 part /boot └─sda2 8:2 0 19G 0 part └─rhel-root 253:0 0 10G 0 lvm /sysroot # ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=rhel/<RHEL VERSION NUMBER>/x86_64/edge zerombr clearpart --all --initlabel part /boot/efi --fstype=efi --size=200 part /boot --fstype=xfs --asprimary --size=800 Uncomment this line to add a SWAP partition of the recommended size #part swap --fstype=swap --recommended part pv.01 --grow volgroup rhel pv.01 logvol / --vgname=rhel --fstype=xfs --size=10000 --name=root To add users, use a line such as the following user --name=<YOUR_USER_NAME> --password=<YOUR_HASHED_PASSWORD> --iscrypted --groups=<YOUR_USER_GROUPS>",
"%post --log=/var/log/anaconda/post-install.log --erroronfail Add the pull secret to CRI-O and set root user-only read/write permissions cat > /etc/crio/openshift-pull-secret << EOF YOUR_OPENSHIFT_PULL_SECRET_HERE EOF chmod 600 /etc/crio/openshift-pull-secret Configure the firewall with the mandatory rules for MicroShift firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 %end",
"sudo yum install -y lorax",
"sudo mkksiso <your_kickstart>.ks <your_installer>.iso <updated_installer>.iso",
"mkdir -p ~/.kube/",
"sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config",
"chmod go-r ~/.kube/config",
"oc get all -A",
"[user@microshift]USD sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp && sudo firewall-cmd --reload",
"[user@microshift]USD oc get all -A",
"[user@workstation]USD mkdir -p ~/.kube/",
"[user@workstation]USD MICROSHIFT_MACHINE=<name or IP address of MicroShift machine>",
"[user@workstation]USD ssh <user>@USDMICROSHIFT_MACHINE \"sudo cat /var/lib/microshift/resources/kubeadmin/USDMICROSHIFT_MACHINE/kubeconfig\" > ~/.kube/config",
"chmod go-r ~/.kube/config",
"[user@workstation]USD oc get all -A",
"rpm -ql microshift-release-info",
"/usr/share/microshift/release/release-x86_64.json",
"rpm2cpio microshift-release-info*.noarch.rpm | cpio -idmv",
"/usr/share/microshift/release/release-x86_64.json",
"RELEASE_FILE=/usr/share/microshift/release/release-USD(uname -m).json",
"jq -r '.images | .[]' USD{RELEASE_FILE} > microshift-container-refs.txt",
"\"<microshift_quay:8443>\": { 1 \"auth\": \"<microshift_auth>\", 2 \"email\": \"<[email protected]>\" 3 },",
"sudo dnf install -y skopeo",
"PULL_SECRET_FILE=~/.pull-secret-mirror.json",
"IMAGE_LIST_FILE=~/microshift-container-refs.txt",
"IMAGE_LOCAL_DIR=~/microshift-containers",
"while read -r src_img ; do # Remove the source registry prefix dst_img=USD(echo \"USD{src_img}\" | cut -d '/' -f 2-) # Run the image download command echo \"Downloading 'USD{src_img}' to 'USD{IMAGE_LOCAL_DIR}'\" mkdir -p \"USD{IMAGE_LOCAL_DIR}/USD{dst_img}\" skopeo copy --all --quiet --preserve-digests --authfile \"USD{PULL_SECRET_FILE}\" docker://\"USD{src_img}\" dir://\"USD{IMAGE_LOCAL_DIR}/USD{dst_img}\" done < \"USD{IMAGE_LIST_FILE}\"",
"sudo dnf install -y skopeo",
"IMAGE_PULL_FILE=~/.pull-secret-mirror.json",
"IMAGE_LOCAL_DIR=~/microshift-containers",
"TARGET_REGISTRY= <registry_host>:<port> 1",
"pushd \"USD{IMAGE_LOCAL_DIR}\" >/dev/null while read -r src_manifest ; do local src_img src_img=USD(dirname \"USD{src_manifest}\") # Add the target registry prefix and remove SHA local -r dst_img=\"USD{TARGET_REGISTRY}/USD{src_img}\" local -r dst_img_no_tag=\"USD{TARGET_REGISTRY}/USD{src_img%%[@:]*}\" # Run the image upload echo \"Uploading 'USD{src_img}' to 'USD{dst_img}'\" skopeo copy --all --quiet --preserve-digests --authfile \"USD{IMAGE_PULL_FILE}\" dir://\"USD{IMAGE_LOCAL_DIR}/USD{src_img}\" docker://\"USD{dst_img}\" done < <(find . -type f -name manifest.json -printf '%P\\n') popd >/dev/null",
"sudo update-ca-trust",
"[[registry]] prefix = \"\" location = \"<registry_host>:<port>\" 1 mirror-by-digest-only = true insecure = false [[registry]] prefix = \"\" location = \"quay.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false [[registry]] prefix = \"\" location = \"registry.access.redhat.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"<registry_host>:<port>\" insecure = false",
"sudo systemctl enable microshift",
"sudo reboot",
"sudo dnf install -y microshift-release-info-<release_version>",
"sudo ls /usr/share/microshift/release",
"release-x86_64.json release-aarch64.json",
"sudo dnf download microshift-release-info- <release_version> 1",
"microshift-release-info-4.18.1.-202511101230.p0.g7dc6a00.assembly.4.18.1.el9.noarch.rpm",
"rpm2cpio <my_microshift_release_info> | cpio -idmv 1 ./usr/share/microshift/release/release-aarch64.json ./usr/share/microshift/release/release-x86_64.json",
"RELEASE_FILE= </path/to/your/release-USD(uname -m).json> 1",
"BLUEPRINT_FILE= </path/to/your/blueprint.toml> 1",
"jq -r '.images | .[] | (\"[[containers]]\\nsource = \\\"\" + . + \"\\\"\\n\")' \"USD{RELEASE_FILE}\" >> \"USD{BLUEPRINT_FILE}\"",
"[[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82cfef91557f9a70cff5a90accba45841a37524e9b93f98a97b20f6b2b69e5db\" [[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82cfef91557f9a70cff5a90accba45841a37524e9b93f98a97b20f6b2b69e5db\"",
"[[containers]] source = \" <my_image_pullspec_with_tag_or_digest> \"",
"[containers] auth_file_path = \"/etc/osbuild-worker/pull-secret.json\"",
"cat > <microshift_blueprint.toml> <<EOF 1 name = \" <microshift_blueprint> \" 2 description = \"\" version = \"0.0.1\" modules = [] groups = [] [[packages]] name = \"microshift\" version = \"4.18.1\" 3 [customizations.services] enabled = [\"microshift\"] EOF",
"name = \"microshift_blueprint\" description = \"MicroShift 4.17.1 on x86_64 platform\" version = \"0.0.1\" modules = [] groups = [] [[packages]] 1 name = \"microshift\" version = \"4.17.1\" [customizations.services] 2 enabled = [\"microshift\"] [customizations.firewall] ports = [\"22:tcp\", \"80:tcp\", \"443:tcp\", \"5353:udp\", \"6443:tcp\", \"30000-32767:tcp\", \"30000-32767:udp\"] [[containers]] 3 source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f41e79c17e8b41f1b0a5a32c3e2dd7cd15b8274554d3f1ba12b2598a347475f4\" [[containers]] source = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbc65f1fba7d92b36cf7514cd130fe83a9bd211005ddb23a8dc479e0eea645fd\" ... EOF",
"sudo composer-cli blueprints push <microshift_blueprint.toml> 1",
"sudo composer-cli blueprints depsolve <microshift_blueprint> | grep microshift 1",
"blueprint: microshift_blueprint v0.0.1 microshift-greenboot-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-networking-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-release-info-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch microshift-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.x86_64 microshift-selinux-4.17.1-202305250827.p0.g4105d3b.assembly.4.17.1.el9.noarch",
"sudo composer-cli blueprints depsolve <microshift_blueprint> 1",
"BUILDID=USD(sudo composer-cli compose start-ostree --ref \"rhel/{op-system-version-major}/USD(uname -m)/edge\" <microshift_blueprint> edge-container | awk '/^Compose/ {print USD2}') 1",
"sudo composer-cli compose status",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 RUNNING Wed Jun 7 12:26:23 2023 microshift_blueprint 0.0.1 edge-container",
"ID Status Time Blueprint Version Type Size cc3377ec-4643-4483-b0e7-6b0ad0ae6332 FINISHED Wed Jun 7 12:32:37 2023 microshift_blueprint 0.0.1 edge-container",
"sudo composer-cli compose image USD{BUILDID}",
"sudo chown USD(whoami). USD{BUILDID}-container.tar",
"sudo chmod a+r USD{BUILDID}-container.tar",
"IMAGEID=USD(cat < \"./USD{BUILDID}-container.tar\" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*')",
"sudo podman run -d --name=minimal-microshift-server -p 8085:8080 USD{IMAGEID}",
"cat > microshift-installer.toml <<EOF name = \"microshift-installer\" description = \"\" version = \"0.0.0\" modules = [] groups = [] packages = [] EOF"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/embedding_in_a_rhel_for_edge_image/index
|
2.5. Disaster Recovery
|
2.5. Disaster Recovery Disaster recovery is quicker and easier when the systems are virtualized. On a physical system, if something serious goes wrong, a complete reinstall of the operating system is usually required, resulting in hours of recovery time. However, if the systems are virtualized this is much faster due to the migration ability. If the requirements for live migration are followed, virtual machines can be restarted on another host, and the longest possible delay would be in restoring guest data. Also, because each of the virtualized systems are completely separate from each other, one system's downtime will not affect any others.
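As a hedged illustration of the migration ability referred to above (this example is not part of the original section), a running guest can be moved to another KVM host with libvirt; the guest name and destination host are placeholders:

# Live-migrate the guest "guest1" to host2 over SSH, assuming shared storage and the live migration requirements are met
virsh migrate --live guest1 qemu+ssh://host2.example.com/system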
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-advantages-recovery
|
5.2. Defining the Certificate Authority Hierarchy
|
5.2. Defining the Certificate Authority Hierarchy The CA is the center of the PKI, so the relationship of CA systems, both to each other (CA hierarchy) and to other subsystems (security domain) is vital to planning a Certificate System PKI. When there are multiple CAs in a PKI, the CAs are structured in a hierarchy or chain. The CA above another CA in a chain is called a root CA ; a CA below another CA in the chain is called a subordinate CA . A CA can also be subordinate to a root outside of the Certificate System deployment; for example, a CA which functions as a root CA within the Certificate System deployment can be subordinate to a third-party CA. A Certificate Manager (or CA) is subordinate to another CA because its CA signing certificate, the certificate that allows it to issue certificates, is issued by another CA. The CA that issued the subordinate CA signing certificate controls the CA through the contents of the CA signing certificate. The CA can constrain the subordinate CA through the kinds of certificates that it can issue, the extensions that it is allowed to include in certificates, the number of levels of subordinate CAs the subordinate CA can create, and the validity period of certificates it can issue, as well as the validity period of the subordinate CA's signing certificate. Note Although a subordinate CA can create certificates that violate these constraints, a client authenticating a certificate that violates those constraints will not accept that certificate. A self-signed root CA signs its own CA signing certificate and sets its own constraints as well as setting constraints on the subordinate CA signing certificates it issues. A Certificate Manager can be configured as either a root CA or a subordinate CA. It is easiest to make the first CA installed a self-signed root, so that it is not necessary to apply to a third party and wait for the certificate to be issued. Before deploying the full PKI, however, consider whether to have a root CA, how many to have, and where both root and subordinate CAs will be located. 5.2.1. Subordination to a Public CA Chaining the Certificate System CA to a third-party public CA introduces the restrictions that public CAs place on the kinds of certificates the subordinate CA can issue and the nature of the certificate chain. For example, a CA that chains to a third-party CA might be restricted to issuing only Secure Multipurpose Internet Mail Extensions (S/MIME) and SSL/TLS client authentication certificates, but not SSL/TLS server certificates. There are other possible restrictions with using a public CA. This may not be acceptable for some PKI deployments. One benefit of chaining to a public CA is that the third party is responsible for submitting the root CA certificate to a web browser or other client software. This can be a major advantage for an extranet with certificates that are accessed by different companies with browsers that cannot be controlled by the administrator. Creating a root CA in the CA hierarchy means that the local organization must get the root certificate into all the browsers which will use the certificates issued by the Certificate System. There are tools to do this within an intranet, but it can be difficult to accomplish with an extranet. 5.2.2. Subordination to a Certificate System CA The Certificate System CA can function as a root CA , meaning that the server signs its own CA signing certificate as well as other CA signing certificates, creating an organization-specific CA hierarchy. 
The server can alternatively be configured as a subordinate CA , meaning the server's CA signing key is signed by another CA in an existing CA hierarchy. Setting up a Certificate System CA as the root CA means that the Certificate System administrator has control over all subordinate CAs by setting policies that control the contents of the CA signing certificates issued. A subordinate CA issues certificates by evaluating its own authentication and certificate profile configuration, without regard for the root CA's configuration. 5.2.3. Linked CA The Certificate System Certificate Manager can function as a linked CA , chaining up to many third-party or public CAs for validation; this provides cross-company trust, so applications can verify certificate chains outside the company certificate hierarchy. A Certificate Manager is chained to a third-party CA by requesting the Certificate Manager's CA signing certificate from the third-party CA. Related to this, the Certificate Manager also can issue cross-pair or cross-signed certificates . Cross-pair certificates create a trusted relationship between two separate CAs by issuing and storing cross-signed certificates between these two CAs. By using cross-signed certificate pairs, certificates issued outside the organization's PKI can be trusted within the system. These are also called bridge certificates , related to the Federal Bridge Certification Authority (FBCA) definition. 5.2.4. CA Cloning Instead of creating a hierarchy of root and subordinate CAs, it is possible to create multiple clones of a Certificate Manager and configure each clone to issue certificates within a range of serial numbers. A cloned Certificate Manager uses the same CA signing key and certificate as another Certificate Manager, the master Certificate Manager. Note If there is a chance that a subsystem will be cloned, then it is easiest to export its key pairs during the configuration process and save them to a secure location. The key pairs for the original Certificate Manager have to be available when the clone instance is configured, so that the clone can generate its certificates from the original Certificate Manager's keys. It is also possible to export the keys from the security databases at a later time, using the pk12util or the PKCS12Export commands. Because clone CAs and original CAs use the same CA signing key and certificate to sign the certificates they issue, the issuer name in all the certificates is the same. Clone CAs and the original Certificate Managers issue certificates as if they are a single CA. These servers can be placed on different hosts for high availability failover support. The advantage of cloning is that it distributes the Certificate Manager's load across several processes or even several physical machines. For a CA with a high enrollment demand, the distribution gained from cloning allows more certificates to be signed and issued in a given time interval. A cloned Certificate Manager has the same features, such as agent and end-entity gateway functions, of a regular Certificate Manager. The serial numbers for certificates issued by clones are distributed dynamically. The databases for each clone and master are replicated, so all of the certificate requests and issued certificates, both, are also replicated. This ensures that there are no serial number conflicts while serial number ranges do not have to be manually assigned to the cloned Certificate Managers.
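To make the key export recommendation concrete, here is a minimal sketch using the pk12util command named above; the NSS database path and certificate nickname are instance-specific placeholders and must be adjusted for your deployment:

# Export the CA signing key pair from the instance's NSS database into a PKCS #12 file
pk12util -o /root/ca-signing-keys.p12 -d /var/lib/pki/pki-tomcat/alias -n "caSigningCert cert-pki-ca"

The command prompts for the NSS database password and for a password to protect the exported file, which should then be stored in the secure location the section recommends.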
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/sect-Deployment_Guide-Planning_Your_CRTS-Arranging_the_Certificate_Authority_Hierarchy
|
Chapter 6. Known issues in Red Hat Decision Manager 7.13
|
Chapter 6. Known issues in Red Hat Decision Manager 7.13 This section lists known issues with Red Hat Decision Manager 7.13. 6.1. Spring Boot Wrong managed version of Spring Boot dependencies [ RHPAM-4413 ] Issue: The Spring Boot version (2.6.6) in the Maven repository is not certified by Red Hat yet. Therefore, you will receive a mismatch for the Narayana starter in productized binaries. Workaround: In your pom.xml file, define the following properties to override the current versions: <version.org.springframework.boot>2.5.12</version.org.springframework.boot> <version.me.snowdrop.narayana>2.6.3.redhat-00001</version.me.snowdrop.narayana> 6.2. Red Hat build of Kogito Red Hat build of Kogito is aligned with a non-supported Spring Boot version [ RHPAM-4419 ] Issue: Red Hat build of Kogito Spring Boot versions are managed in the kogito-spring-boot-bom file, which imports dependency management from the org.springframework.boot:spring-boot-dependencies BOM. The currently aligned version is 2.6.6, which does not map to any Red Hat supported versions. The latest supported version is 2.5.12. You must override dependency management with a BOM aligning to the Red Hat supported version which is 2.5.12. Workaround: To maintain the order of the imported BOM files, first include the Spring Boot BOM and then include the Red Hat build of Kogito specific BOM file: <dependencyManagement> <dependencies> <dependency> <groupId>dev.snowdrop</groupId> <artifactId>snowdrop-dependencies</artifactId> <version>2.5.12.Final-redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-spring-boot-bom</artifactId> <version>1.13.2.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Align the version of spring-boot-maven-plugin to the same version in your project build configuration file: <plugins> <plugin> <groupId>org.kie.kogito</groupId> <artifactId>kogito-maven-plugin</artifactId> <version>1.13.2.redhat-00002</version> <extensions>true</extensions> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>2.5.12</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> Red Hat build of Kogito on Spring Boot leads to misalignment of Kafka-clients version [ RHPAM-4418 ] Issue: The Kafka-clients dependency version for Red Hat build of Kogito Spring Boot is by default managed by the org.springframework.boot:spring-boot-dependencies BOM. Depending on which Spring Boot version is used, users might end up with an unsupported or vulnerable version of Kafka-clients. You must override the default dependency in your kogito-spring-boot-bom to make sure you have the expected Kafka-clients version. Workaround: In your projects, define dependencyManagement explicitly for org.apache.kafka:kafka-clients dependency to use the version released by AMQ Streams.
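The final workaround describes the dependencyManagement entry in prose only; a hedged sketch of what it might look like follows, alongside the XML snippets given for the other workarounds. The version is a placeholder property, because the correct value is whichever kafka-clients version is released by AMQ Streams for your environment:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <!-- placeholder: use the kafka-clients version released by AMQ Streams -->
      <version>${amq-streams.kafka-clients.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>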
|
[
"<version.org.springframework.boot>2.5.12</version.org.springframework.boot> <version.me.snowdrop.narayana>2.6.3.redhat-00001</version.me.snowdrop.narayana>",
"<dependencyManagement> <dependencies> <dependency> <groupId>dev.snowdrop</groupId> <artifactId>snowdrop-dependencies</artifactId> <version>2.5.12.Final-redhat-00001</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-spring-boot-bom</artifactId> <version>1.13.2.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>",
"<plugins> <plugin> <groupId>org.kie.kogito</groupId> <artifactId>kogito-maven-plugin</artifactId> <version>1.13.2.redhat-00002</version> <extensions>true</extensions> </plugin> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>2.5.12</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins>"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/rn-7.13-known-issues-ref
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.11/proc-providing-feedback-on-redhat-documentation
|
Chapter 61. Enabling notifications in the Red Hat Customer Portal
|
Chapter 61. Enabling notifications in the Red Hat Customer Portal You can enable notifications in the Red Hat Customer Portal to receive product updates and announcements. These notifications inform you of updated or added documentation, product releases, and patch updates related to your installation. With notifications enabled, you can more readily apply product updates as they become available in the Red Hat Customer Portal to keep your distribution current with the latest enhancements and fixes. Prerequisites You have a Red Hat Customer Portal account and are logged in. Procedure In the top-right corner of the Red Hat Customer Portal window, click your profile name and click Notifications . Select the Notifications tab and click Manage Notifications . to Follow , select Products from the drop-down menu, and then select Red Hat Process Automation Manager or Red Hat Decision Manager from the drop-down menu that appears. Click Save Notification to finish. You can add notifications for any other products as needed in the same way.
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/patches-notifications-proc_patching-upgrading
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/migrating_to_identity_management_on_rhel_9/proc_providing-feedback-on-red-hat-documentation_migrating-to-idm-on-rhel-9
|
Package Manifest
|
Package Manifest Red Hat Satellite 6.11 Package Listing for Red Hat Satellite Red Hat Satellite Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/package_manifest/index
|
Chapter 15. IngressController [operator.openshift.io/v1]
|
Chapter 15. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the IngressController. status object status is the most recently observed status of the IngressController. 15.1.1. .spec Description spec is the specification of the desired behavior of the IngressController. Type object Property Type Description clientTLS object clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. defaultCertificate object defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. domain string domain is a DNS name serviced by the ingress controller and is used to configure multiple features: * For the LoadBalancerService endpoint publishing strategy, domain is used to configure DNS records. See endpointPublishingStrategy. * When using a generated default certificate, the certificate will be valid for domain and its subdomains. 
See defaultCertificate. * The value is published to individual Route statuses so that end-users know where to target external DNS records. domain must be unique among all IngressControllers, and cannot be updated. If empty, defaults to ingress.config.openshift.io/cluster .spec.domain. endpointPublishingStrategy object endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. httpCompression object httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. httpEmptyRequestsPolicy string httpEmptyRequestsPolicy describes how HTTP connections should be handled if the connection times out before a request is received. Allowed values for this field are "Respond" and "Ignore". If the field is set to "Respond", the ingress controller sends an HTTP 400 or 408 response, logs the connection (if access logging is enabled), and counts the connection in the appropriate metrics. If the field is set to "Ignore", the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. The default value is "Respond". Typically, these connections come from load balancers' health probes or Web browsers' speculative connections ("preconnect") and can be safely ignored. However, these requests may also be caused by network errors, and so setting this field to "Ignore" may impede detection and diagnosis of problems. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. httpErrorCodePages object httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. httpHeaders object httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. logging object logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. namespaceSelector object namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. nodePlacement object nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. 
replicas integer replicas is the desired number of ingress controller replicas. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. The value of replicas is set based on the value of a chosen field in the Infrastructure CR. If defaultPlacement is set to ControlPlane, the chosen field will be controlPlaneTopology. If it is set to Workers the chosen field will be infrastructureTopology. Replicas will then be set to 1 or 2 based whether the chosen field's value is SingleReplica or HighlyAvailable, respectively. These defaults are subject to change. routeAdmission object routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. routeSelector object routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. tuningOptions object tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. unsupportedConfigOverrides `` unsupportedConfigOverrides allows specifying unsupported configuration options. Its use is unsupported. 15.1.2. .spec.clientTLS Description clientTLS specifies settings for requesting and verifying client certificates, which can be used to enable mutual TLS for edge-terminated and reencrypt routes. Type object Required clientCA clientCertificatePolicy Property Type Description allowedSubjectPatterns array (string) allowedSubjectPatterns specifies a list of regular expressions that should be matched against the distinguished name on a valid client certificate to filter requests. The regular expressions must use PCRE syntax. If this list is empty, no filtering is performed. If the list is nonempty, then at least one pattern must match a client certificate's distinguished name or else the ingress controller rejects the certificate and denies the connection. clientCA object clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. clientCertificatePolicy string clientCertificatePolicy specifies whether the ingress controller requires clients to provide certificates. This field accepts the values "Required" or "Optional". Note that the ingress controller only checks client certificates for edge-terminated and reencrypt TLS routes; it cannot check certificates for cleartext HTTP or passthrough TLS routes. 15.1.3. 
.spec.clientTLS.clientCA Description clientCA specifies a configmap containing the PEM-encoded CA certificate bundle that should be used to verify a client's certificate. The administrator must create this configmap in the openshift-config namespace. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.4. .spec.defaultCertificate Description defaultCertificate is a reference to a secret containing the default certificate served by the ingress controller. When Routes don't specify their own certificate, defaultCertificate is used. The secret must contain the following keys and data: tls.crt: certificate file contents tls.key: key file contents If unset, a wildcard certificate is automatically generated and used. The certificate is valid for the ingress controller domain (and subdomains) and the generated certificate's CA will be automatically integrated with the cluster's trust store. If a wildcard certificate is used and shared by multiple HTTP/2 enabled routes (which implies ALPN) then clients (i.e., notably browsers) are at liberty to reuse open connections. This means a client can reuse a connection to another route and that is likely to fail. This behaviour is generally known as connection coalescing. The in-use certificate (whether generated or user-specified) will be automatically integrated with OpenShift's built-in OAuth server. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 15.1.5. .spec.endpointPublishingStrategy Description endpointPublishingStrategy is used to publish the ingress controller endpoints to other networks, enable load balancer integrations, etc. If unset, the default is based on infrastructure.config.openshift.io/cluster .status.platform: AWS: LoadBalancerService (with External scope) Azure: LoadBalancerService (with External scope) GCP: LoadBalancerService (with External scope) IBMCloud: LoadBalancerService (with External scope) AlibabaCloud: LoadBalancerService (with External scope) Libvirt: HostNetwork Any other platform types (including None) default to HostNetwork. endpointPublishingStrategy cannot be updated. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. 
See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.6. .spec.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks.
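For illustration, a minimal IngressController sketch that uses the HostNetwork strategy might look like the following. The metadata values, port numbers, and comments are assumptions chosen for this example, not recommended defaults:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example-hostnetwork            # illustrative name
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      protocol: TCP      # plain TCP; use PROXY only if the external load balancer sends PROXY protocol
      httpPort: 8080     # assumes port 80 is already in use on the hosts
      httpsPort: 8443    # assumes port 443 is already in use on the hosts
      statsPort: 1936    # the port an external load balancer should probe for health checks

The statsPort shown here is the port referred to in the health-check guidance that continues below.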
The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.7. .spec.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.8. .spec.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.9. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. 
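As a sketch of how the loadBalancer fields above combine with provider-specific parameters, the following spec fragment restricts an internal load balancer to illustrative source ranges and requests an AWS Network Load Balancer. All values, including the CIDR ranges, are placeholders rather than defaults:

spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
      dnsManagementPolicy: Managed
      allowedSourceRanges:
      - 10.0.0.0/8      # illustrative internal IPv4 range
      - fd00::/8        # illustrative IPv6 range
      providerParameters:
        type: AWS
        aws:
          type: NLB     # provision a Network Load Balancer instead of a Classic Load Balancer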
See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.10. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. subnets object subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. 15.1.11. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer.subnets Description subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. Type object Property Type Description ids array (string) ids specifies a list of AWS subnets by subnet ID. Subnet IDs must start with "subnet-", consist only of alphanumeric characters, must be exactly 24 characters long, must be unique, and the total number of subnets specified by ids and names must not exceed 10. names array (string) names specifies a list of AWS subnets by subnet name. 
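Continuing with the Classic Load Balancer parameters above, a hypothetical spec fragment might pin the load balancer to specific subnets and raise the idle timeout. The subnet ID and name are placeholders:

spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      dnsManagementPolicy: Managed
      providerParameters:
        type: AWS
        aws:
          type: Classic
          classicLoadBalancer:
            connectionIdleTimeout: 120s      # parsed as a Go time duration; the documented default is 60s
            subnets:
              ids:
              - subnet-0fcf8e0392f0910d5     # placeholder subnet ID
              names:
              - example-private-subnet-b     # placeholder subnet name

The naming constraints described next apply to any such placeholder values.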
Subnet names must not start with "subnet-", must not include commas, must be under 256 characters in length, must be unique, and the total number of subnets specified by ids and names must not exceed 10. 15.1.12. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object Property Type Description eipAllocations array (string) eipAllocations is a list of IDs for Elastic IP (EIP) addresses that are assigned to the Network Load Balancer. The following restrictions apply: eipAllocations can only be used with external scope, not internal. An EIP can be allocated to only a single IngressController. The number of EIP allocations must match the number of subnets that are used for the load balancer. Each EIP allocation must be unique. A maximum of 10 EIP allocations are permitted. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html for general information about configuration, characteristics, and limitations of Elastic IP addresses. subnets object subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. 15.1.13. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer.subnets Description subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. Type object Property Type Description ids array (string) ids specifies a list of AWS subnets by subnet ID. Subnet IDs must start with "subnet-", consist only of alphanumeric characters, must be exactly 24 characters long, must be unique, and the total number of subnets specified by ids and names must not exceed 10. names array (string) names specifies a list of AWS subnets by subnet name. Subnet names must not start with "subnet-", must not include commas, must be under 256 characters in length, must be unique, and the total number of subnets specified by ids and names must not exceed 10. 15.1.14. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. 
Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.15. .spec.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.16. .spec.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.17. .spec.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. 
Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.18. .spec.httpCompression Description httpCompression defines a policy for HTTP traffic compression. By default, there is no HTTP compression. Type object Property Type Description mimeTypes array (string) mimeTypes is a list of MIME types that should have compression applied. This list can be empty, in which case the ingress controller does not apply compression. Note: Not all MIME types benefit from compression, but HAProxy will still use resources to try to compress if instructed to. Generally speaking, text (html, css, js, etc.) formats benefit from compression, but formats that are already compressed (image, audio, video, etc.) benefit little in exchange for the time and cpu spent on compressing again. See https://joehonton.medium.com/the-gzip-penalty-d31bd697f1a2 15.1.19. .spec.httpErrorCodePages Description httpErrorCodePages specifies a configmap with custom error pages. The administrator must create this configmap in the openshift-config namespace. This configmap should have keys in the format "error-page-<error code>.http", where <error code> is an HTTP error code. For example, "error-page-503.http" defines an error page for HTTP 503 responses. Currently only error pages for 503 and 404 responses can be customized. Each value in the configmap should be the full response, including HTTP headers. Eg- https://raw.githubusercontent.com/openshift/router/fadab45747a9b30cc3f0a4b41ad2871f95827a93/images/router/haproxy/conf/error-page-503.http If this field is empty, the ingress controller uses the default error pages. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 15.1.20. .spec.httpHeaders Description httpHeaders defines policy for HTTP headers. If this field is empty, the default values are used. Type object Property Type Description actions object actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. 
Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. Please refer to the documentation for that API field for more details. forwardedHeaderPolicy string forwardedHeaderPolicy specifies when and how the IngressController sets the Forwarded, X-Forwarded-For, X-Forwarded-Host, X-Forwarded-Port, X-Forwarded-Proto, and X-Forwarded-Proto-Version HTTP headers. The value may be one of the following: * "Append", which specifies that the IngressController appends the headers, preserving existing headers. * "Replace", which specifies that the IngressController sets the headers, replacing any existing Forwarded or X-Forwarded-* headers. * "IfNone", which specifies that the IngressController sets the headers if they are not already set. * "Never", which specifies that the IngressController never sets the headers, preserving any existing headers. By default, the policy is "Append". headerNameCaseAdjustments `` headerNameCaseAdjustments specifies case adjustments that can be applied to HTTP header names. Each adjustment is specified as an HTTP header name with the desired capitalization. For example, specifying "X-Forwarded-For" indicates that the "x-forwarded-for" HTTP header should be adjusted to have the specified capitalization. These adjustments are only applied to cleartext, edge-terminated, and re-encrypt routes, and only when using HTTP/1. For request headers, these adjustments are applied only for routes that have the haproxy.router.openshift.io/h1-adjust-case=true annotation. For response headers, these adjustments are applied to all HTTP responses. If this field is empty, no request headers are adjusted. uniqueId object uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. 15.1.21. .spec.httpHeaders.actions Description actions specifies options for modifying headers and their values. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be modified for TLS passthrough connections. Setting the HSTS ( Strict-Transport-Security ) header is not supported via actions. 
Strict-Transport-Security may only be configured using the "haproxy.router.openshift.io/hsts_header" route annotation, and only in accordance with the policy specified in Ingress.Spec.RequiredHSTSPolicies. Any actions defined here are applied after any actions related to the following other fields: cache-control, spec.clientTLS, spec.httpHeaders.forwardedHeaderPolicy, spec.httpHeaders.uniqueId, and spec.httpHeaders.headerNameCaseAdjustments. In case of HTTP request headers, the actions specified in spec.httpHeaders.actions on the Route will be executed after the actions specified in the IngressController's spec.httpHeaders.actions field. In case of HTTP response headers, the actions specified in spec.httpHeaders.actions on the IngressController will be executed after the actions specified in the Route's spec.httpHeaders.actions field. Headers set using this API cannot be captured for use in access logs. The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. Please refer to the documentation for that API field for more details. Type object Property Type Description request array request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions. Currently, actions may either Set or Delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". request[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. response array response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may either Set or Delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". response[] object IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. 15.1.22. .spec.httpHeaders.actions.request Description request is a list of HTTP request headers to modify. Actions defined here will modify the request headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for request headers will be executed before Route actions.
Currently, actions may either Set or Delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.23. .spec.httpHeaders.actions.request[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#$%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 15.1.24. .spec.httpHeaders.actions.request[].action Description action specifies actions to perform on headers, such as setting or deleting headers. Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.25. .spec.httpHeaders.actions.request[].action.set Description set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.26. .spec.httpHeaders.actions.response Description response is a list of HTTP response headers to modify. Actions defined here will modify the response headers of all requests passing through an ingress controller. These actions are applied to all Routes i.e. for all connections handled by the ingress controller defined within a cluster. IngressController actions for response headers will be executed after Route actions. Currently, actions may either Set or Delete header values. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Type array 15.1.27. .spec.httpHeaders.actions.response[] Description IngressControllerHTTPHeader specifies configuration for setting or deleting an HTTP header.
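Bringing the request and response action lists together, the following spec fragment is a sketch of both kinds of action. The header names and values are illustrative; the dynamic value uses only the req.hdr fetcher and lower converter permitted by this API:

spec:
  httpHeaders:
    actions:
      request:
      - name: X-Target                          # illustrative header name
        action:
          type: Set
          set:
            value: "%[req.hdr(X-target),lower]" # lowercases the incoming X-target value, mirroring the example in the field description
      - name: X-Debug-Info                      # illustrative header removed from incoming requests
        action:
          type: Delete
      response:
      - name: X-Frame-Options                   # illustrative response header
        action:
          type: Set
          set:
            value: DENY

Each entry in both lists is the IngressControllerHTTPHeader type described in the surrounding subsections.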
Type object Required action name Property Type Description action object action specifies actions to perform on headers, such as setting or deleting headers. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#$%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Host, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 15.1.28. .spec.httpHeaders.actions.response[].action Description action specifies actions to perform on headers, such as setting or deleting headers. Type object Required type Property Type Description set object set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 15.1.29. .spec.httpHeaders.actions.response[].action.set Description set specifies how the HTTP header should be set. This field is required when type is Set and forbidden otherwise. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 15.1.30. .spec.httpHeaders.uniqueId Description uniqueId describes configuration for a custom HTTP header that the ingress controller should inject into incoming HTTP requests. Typically, this header is configured to have a value that is unique to the HTTP request. The header can be used by applications or included in access logs to facilitate tracing individual HTTP requests. If this field is empty, no such header is injected into requests. Type object Property Type Description format string format specifies the format for the injected HTTP header's value. This field has no effect unless name is specified. For the HAProxy-based ingress controller implementation, this format uses the same syntax as the HTTP log format. If the field is empty, the default value is "%{+X}o\\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid"; see the corresponding HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 name string name specifies the name of the HTTP header (for example, "unique-id") that the ingress controller should inject into HTTP requests. The field's value must be a valid HTTP header name as defined in RFC 2616 section 4.2. If the field is empty, no header is injected. 15.1.31. .spec.logging Description logging defines parameters for what should be logged where. If this field is empty, operational logs are enabled but access logs are disabled. Type object Property Type Description access object access describes how the client requests should be logged.
If this field is empty, access logging is disabled. 15.1.32. .spec.logging.access Description access describes how the client requests should be logged. If this field is empty, access logging is disabled. Type object Required destination Property Type Description destination object destination is where access logs go. httpCaptureCookies `` httpCaptureCookies specifies HTTP cookies that should be captured in access logs. If this field is empty, no cookies are captured. httpCaptureHeaders object httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. httpLogFormat string httpLogFormat specifies the format of the log message for an HTTP request. If this field is empty, log messages use the implementation's default HTTP log format. For HAProxy's default HTTP log format, see the HAProxy documentation: http://cbonte.github.io/haproxy-dconv/2.0/configuration.html#8.2.3 Note that this format only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). It does not affect the log format for TLS passthrough connections. logEmptyRequests string logEmptyRequests specifies how connections on which no request is received should be logged. Typically, these empty requests come from load balancers' health probes or Web browsers' speculative connections ("preconnect"), in which case logging these requests may be undesirable. However, these requests may also be caused by network errors, in which case logging empty requests may be useful for diagnosing the errors. In addition, these requests may be caused by port scans, in which case logging empty requests may aid in detecting intrusion attempts. Allowed values for this field are "Log" and "Ignore". The default value is "Log". 15.1.33. .spec.logging.access.destination Description destination is where access logs go. Type object Required type Property Type Description container object container holds parameters for the Container logging destination. Present only if type is Container. syslog object syslog holds parameters for a syslog endpoint. Present only if type is Syslog. type string type is the type of destination for logs. It must be one of the following: * Container The ingress operator configures the sidecar container named "logs" on the ingress controller pod and configures the ingress controller to write logs to the sidecar. The logs are then available as container logs. The expectation is that the administrator configures a custom logging solution that reads logs from this sidecar. Note that using container logs means that logs may be dropped if the rate of logs exceeds the container runtime's or the custom logging solution's capacity. * Syslog Logs are sent to a syslog endpoint. The administrator must specify an endpoint that can receive syslog messages. The expectation is that the administrator has configured a custom syslog instance. 15.1.34. .spec.logging.access.destination.container Description container holds parameters for the Container logging destination. Present only if type is Container. 
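As an illustration of the access-logging fields above, the following spec fragment sends access logs to the sidecar container and ignores empty requests. The values are shown only to illustrate the syntax:

spec:
  logging:
    access:
      destination:
        type: Container
        container:
          maxLength: 1024      # matches the documented default, shown here explicitly
      logEmptyRequests: Ignore # do not log connections that carry no request, such as health probes

The maxLength field used here is the one described next.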
Type object Property Type Description maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 8192, inclusive. When omitted, the default value is 1024. 15.1.35. .spec.logging.access.destination.syslog Description syslog holds parameters for a syslog endpoint. Present only if type is Syslog. Type object Required address port Property Type Description address string address is the IP address of the syslog endpoint that receives log messages. facility string facility specifies the syslog facility of log messages. If this field is empty, the facility is "local1". maxLength integer maxLength is the maximum length of the log message. Valid values are integers in the range 480 to 4096, inclusive. When omitted, the default value is 1024. port integer port is the UDP port number of the syslog endpoint that receives log messages. 15.1.36. .spec.logging.access.httpCaptureHeaders Description httpCaptureHeaders defines HTTP headers that should be captured in access logs. If this field is empty, no headers are captured. Note that this option only applies to cleartext HTTP connections and to secure HTTP connections for which the ingress controller terminates encryption (that is, edge-terminated or reencrypt connections). Headers cannot be captured for TLS passthrough connections. Type object Property Type Description request `` request specifies which HTTP request headers to capture. If this field is empty, no request headers are captured. response `` response specifies which HTTP response headers to capture. If this field is empty, no response headers are captured. 15.1.37. .spec.namespaceSelector Description namespaceSelector is used to filter the set of namespaces serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.38. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.39. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.40. .spec.nodePlacement Description nodePlacement enables explicit control over the scheduling of the ingress controller. If unset, defaults are used. See NodePlacement for more details. 
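For example, the following spec fragment pins ingress controller pods to nodes carrying an illustrative label and tolerates a matching illustrative taint; both the label and the taint key are assumptions for this sketch:

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        kubernetes.io/os: linux
        node-role.kubernetes.io/infra: ""   # illustrative label; only matchLabels is supported here
    tolerations:
    - key: node-role.kubernetes.io/infra    # illustrative taint key
      operator: Exists
      effect: NoSchedule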
Type object Property Type Description nodeSelector object nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. tolerations array tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 15.1.41. .spec.nodePlacement.nodeSelector Description nodeSelector is the node selector applied to ingress controller deployments. If set, the specified selector is used and replaces the default. If unset, the default depends on the value of the defaultPlacement field in the cluster config.openshift.io/v1/ingresses status. When defaultPlacement is Workers, the default is: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' When defaultPlacement is ControlPlane, the default is: kubernetes.io/os: linux node-role.kubernetes.io/master: '' These defaults are subject to change. Note that using nodeSelector.matchExpressions is not supported. Only nodeSelector.matchLabels may be used. This is a limitation of the Kubernetes API: the pod spec does not allow complex expressions for node selectors. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.42. .spec.nodePlacement.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.43. .spec.nodePlacement.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.44. 
.spec.nodePlacement.tolerations Description tolerations is a list of tolerations applied to ingress controller deployments. The default is an empty list. See https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ Type array 15.1.45. .spec.nodePlacement.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 15.1.46. .spec.routeAdmission Description routeAdmission defines a policy for handling new route claims (for example, to allow or deny claims across namespaces). If empty, defaults will be applied. See specific routeAdmission fields for details about their defaults. Type object Property Type Description namespaceOwnership string namespaceOwnership describes how host name claims across namespaces should be handled. Value must be one of: - Strict: Do not allow routes in different namespaces to claim the same host. - InterNamespaceAllowed: Allow routes to claim different paths of the same host name across namespaces. If empty, the default is Strict. wildcardPolicy string wildcardPolicy describes how routes with wildcard policies should be handled for the ingress controller. WildcardPolicy controls use of routes [1] exposed by the ingress controller based on the route's wildcard policy. [1] https://github.com/openshift/api/blob/master/route/v1/types.go Note: Updating WildcardPolicy from WildcardsAllowed to WildcardsDisallowed will cause admitted routes with a wildcard policy of Subdomain to stop working. These routes must be updated to a wildcard policy of None to be readmitted by the ingress controller. WildcardPolicy supports WildcardsAllowed and WildcardsDisallowed values. If empty, defaults to "WildcardsDisallowed". 15.1.47. .spec.routeSelector Description routeSelector is used to filter the set of Routes serviced by the ingress controller. This is useful for implementing shards. If unset, the default is no filtering. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
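Putting the routeAdmission and routeSelector fields together, a sharded ingress controller could be sketched as follows; the type: sharded label is an arbitrary example, not a convention defined by this API:

spec:
  routeAdmission:
    namespaceOwnership: Strict           # routes in different namespaces may not claim the same host
    wildcardPolicy: WildcardsDisallowed
  routeSelector:
    matchLabels:
      type: sharded                      # only Routes carrying this illustrative label are serviced

The matchLabels form used here follows the semantics described in the text that continues below.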
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.48. .spec.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.49. .spec.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.50. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for ingresscontrollers. If unset, the default is based on the apiservers.config.openshift.io/cluster resource. Note that when using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 may cause a new profile configuration to be applied to the ingress controller, resulting in a rollout. Type object Property Type Description custom `` custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. 
An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: VersionTLS12 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: VersionTLS13 old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: VersionTLS10 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 15.1.51. .spec.tuningOptions Description tuningOptions defines parameters for adjusting the performance of ingress controller pods. All fields are optional and will use their respective defaults if not set. See specific tuningOptions fields for more details. Setting fields within tuningOptions is generally not recommended. The default values are suitable for most configurations. Type object Property Type Description clientFinTimeout string clientFinTimeout defines how long a connection will be held open while waiting for the client response to the server/backend closing the connection. If unset, the default timeout is 1s clientTimeout string clientTimeout defines how long a connection will be held open while waiting for a client response. If unset, the default timeout is 30s connectTimeout string ConnectTimeout defines the maximum time to wait for a connection attempt to a server/backend to succeed. 
This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". When omitted, this means the user has no opinion and the platform is left to choose a reasonable default. This default is subject to change over time. The current default is 5s. headerBufferBytes integer headerBufferBytes describes how much memory should be reserved (in bytes) for IngressController connection sessions. Note that this value must be at least 16384 if HTTP/2 is enabled for the IngressController ( https://tools.ietf.org/html/rfc7540 ). If this field is empty, the IngressController will use a default value of 32768 bytes. Setting this field is generally not recommended as headerBufferBytes values that are too small may break the IngressController and headerBufferBytes values that are too large could cause the IngressController to use significantly more memory than necessary. headerBufferMaxRewriteBytes integer headerBufferMaxRewriteBytes describes how much memory should be reserved (in bytes) from headerBufferBytes for HTTP header rewriting and appending for IngressController connection sessions. Note that incoming HTTP requests will be limited to (headerBufferBytes - headerBufferMaxRewriteBytes) bytes, meaning headerBufferBytes must be greater than headerBufferMaxRewriteBytes. If this field is empty, the IngressController will use a default value of 8192 bytes. Setting this field is generally not recommended as headerBufferMaxRewriteBytes values that are too small may break the IngressController and headerBufferMaxRewriteBytes values that are too large could cause the IngressController to use significantly more memory than necessary. healthCheckInterval string healthCheckInterval defines how long the router waits between two consecutive health checks on its configured backends. This value is applied globally as a default for all routes, but may be overridden per-route by the route annotation "router.openshift.io/haproxy.health.check.interval". Expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Setting this to less than 5s can cause excess traffic due to too frequent TCP health checks and accompanying SYN packet storms. Alternatively, setting this too high can result in increased latency, due to backend servers that are no longer available, but haven't yet been detected as such. An empty or zero healthCheckInterval means no opinion and IngressController chooses a default, which is subject to change over time. Currently the default healthCheckInterval value is 5s. Currently the minimum allowed value is 1s and the maximum allowed value is 2147483647ms (24.85 days). Both are subject to change over time. maxConnections integer maxConnections defines the maximum number of simultaneous connections that can be established per HAProxy process. Increasing this value allows each ingress controller pod to handle more connections but at the cost of additional system resources being consumed. Permitted values are: empty, 0, -1, and the range 2000-2000000. If this field is empty or 0, the IngressController will use the default value of 50000, but the default is subject to change in future releases.
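As a sketch of the tuningOptions fields above, with values chosen only to illustrate the syntax rather than as recommendations:

spec:
  tuningOptions:
    healthCheckInterval: 10s
    clientTimeout: 30s       # matches the documented default, shown here explicitly
    reloadInterval: 15s
    maxConnections: -1       # let HAProxy compute the limit from the container's ulimits

The -1 value used here for maxConnections is explained in the text that follows.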
If the value is -1 then HAProxy will dynamically compute a maximum value based on the available ulimits in the running container. Selecting -1 (i.e., auto) will result in a large value being computed (~520000 on OpenShift >=4.10 clusters) and therefore each HAProxy process will incur significant memory usage compared to the current default of 50000. Setting a value that is greater than the current operating system limit will prevent the HAProxy process from starting. If you choose a discrete value (e.g., 750000) and the router pod is migrated to a new node, there's no guarantee that the new node has identical ulimits configured. In such a scenario the pod would fail to start. If you have nodes with different ulimits configured (e.g., different tuned profiles) and you choose a discrete value then the guidance is to use -1 and let the value be computed dynamically at runtime. You can monitor memory usage for router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}'. You can monitor memory usage of individual HAProxy processes in router containers with the following metric: 'container_memory_working_set_bytes{container="router",namespace="openshift-ingress"}/container_processes{container="router",namespace="openshift-ingress"}'. reloadInterval string reloadInterval defines the minimum interval at which the router is allowed to reload to accept new changes. Increasing this value can prevent the accumulation of HAProxy processes, depending on the scenario. Increasing this interval can also lessen load imbalance on a backend's servers when using the roundrobin balancing algorithm. Alternatively, decreasing this value may decrease latency since updates to HAProxy's configuration can take effect more quickly. The value must be a time duration value; see https://pkg.go.dev/time#ParseDuration . Currently, the minimum value allowed is 1s, and the maximum allowed value is 120s. Minimum and maximum allowed values may change in future versions of OpenShift. Note that if a duration outside of these bounds is provided, the value of reloadInterval will be capped/floored and not rejected (e.g. a duration of over 120s will be capped to 120s; the IngressController will not reject and replace this disallowed value with the default). A zero value for reloadInterval tells the IngressController to choose the default, which is currently 5s and subject to change without notice. This field expects an unsigned duration string of decimal numbers, each with optional fraction and a unit suffix, e.g. "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs" U+00B5 or "μs" U+03BC), "ms", "s", "m", "h". Note: Setting a value significantly larger than the default of 5s can cause latency in observing updates to routes and their endpoints. HAProxy's configuration will be reloaded less frequently, and newly created routes will not be served until the subsequent reload. serverFinTimeout string serverFinTimeout defines how long a connection will be held open while waiting for the server/backend response to the client closing the connection. If unset, the default timeout is 1s serverTimeout string serverTimeout defines how long a connection will be held open while waiting for a server/backend response. If unset, the default timeout is 30s threadCount integer threadCount defines the number of threads created per HAProxy process. 
Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy currently supports up to 64 threads. If this field is empty, the IngressController will use the default value. The current default is 4 threads, but this may change in future releases. Setting this field is generally not recommended. Increasing the number of HAProxy threads allows ingress controller pods to utilize more CPU time under load, potentially starving other pods if set too high. Reducing the number of threads may cause the ingress controller to perform poorly. tlsInspectDelay string tlsInspectDelay defines how long the router can hold data to find a matching route. Setting this too short can cause the router to fall back to the default certificate for edge-terminated or reencrypt routes even when a better matching certificate could be used. If unset, the default inspect delay is 5s tunnelTimeout string tunnelTimeout defines how long a tunnel connection (including websockets) will be held open while the tunnel is idle. If unset, the default timeout is 1h 15.1.52. .status Description status is the most recently observed status of the IngressController. Type object Property Type Description availableReplicas integer availableReplicas is number of observed available replicas according to the ingress controller deployment. conditions array conditions is a list of conditions and their status. Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. conditions[] object OperatorCondition is just the standard condition fields. domain string domain is the actual domain in use. endpointPublishingStrategy object endpointPublishingStrategy is the actual strategy in use. namespaceSelector object namespaceSelector is the actual namespaceSelector in use. observedGeneration integer observedGeneration is the most recent generation observed. routeSelector object routeSelector is the actual routeSelector in use. selector string selector is a label selector, in string format, for ingress controller pods corresponding to the IngressController. The number of matching pods should equal the value of availableReplicas. tlsProfile object tlsProfile is the TLS connection configuration that is in effect. 15.1.53. .status.conditions Description conditions is a list of conditions and their status. 
Available means the ingress controller deployment is available and servicing route and ingress resources (i.e, .status.availableReplicas equals .spec.replicas) There are additional conditions which indicate the status of other ingress controller features and capabilities. * LoadBalancerManaged - True if the following conditions are met: * The endpoint publishing strategy requires a service load balancer. - False if any of those conditions are unsatisfied. * LoadBalancerReady - True if the following conditions are met: * A load balancer is managed. * The load balancer is ready. - False if any of those conditions are unsatisfied. * DNSManaged - True if the following conditions are met: * The endpoint publishing strategy and platform support DNS. * The ingress controller domain is set. * dns.config.openshift.io/cluster configures DNS zones. - False if any of those conditions are unsatisfied. * DNSReady - True if the following conditions are met: * DNS is managed. * DNS records have been successfully created. - False if any of those conditions are unsatisfied. Type array 15.1.54. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 15.1.55. .status.endpointPublishingStrategy Description endpointPublishingStrategy is the actual strategy in use. Type object Required type Property Type Description hostNetwork object hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. loadBalancer object loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. nodePort object nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. private object private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. type string type is the publishing strategy to use. Valid values are: * LoadBalancerService Publishes the ingress controller using a Kubernetes LoadBalancer Service. In this configuration, the ingress controller deployment uses container networking. A LoadBalancer Service is created to publish the deployment. See: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer If domain is set, a wildcard DNS record will be managed to point at the LoadBalancer Service's external name. DNS records are managed only in DNS zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Wildcard DNS management is currently supported only on the AWS, Azure, and GCP platforms. * HostNetwork Publishes the ingress controller on node ports where the ingress controller is deployed. In this configuration, the ingress controller deployment uses host networking, bound to node ports 80 and 443. The user is responsible for configuring an external load balancer to publish the ingress controller via the node ports. * Private Does not publish the ingress controller. In this configuration, the ingress controller deployment uses container networking, and is not explicitly published. The user must manually publish the ingress controller. * NodePortService Publishes the ingress controller using a Kubernetes NodePort Service. In this configuration, the ingress controller deployment uses container networking. A NodePort Service is created to publish the deployment. 
The specific node ports are dynamically allocated by OpenShift; however, to support static port allocations, user changes to the node port field of the managed NodePort Service will be preserved. 15.1.56. .status.endpointPublishingStrategy.hostNetwork Description hostNetwork holds parameters for the HostNetwork endpoint publishing strategy. Present only if type is HostNetwork. Type object Property Type Description httpPort integer httpPort is the port on the host which should be used to listen for HTTP requests. This field should be set when port 80 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 80. httpsPort integer httpsPort is the port on the host which should be used to listen for HTTPS requests. This field should be set when port 443 is already in use. The value should not coincide with the NodePort range of the cluster. When the value is 0 or is not specified it defaults to 443. protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. statsPort integer statsPort is the port on the host where the stats from the router are published. The value should not coincide with the NodePort range of the cluster. If an external load balancer is configured to forward connections to this IngressController, the load balancer should use this port for health checks. The load balancer can send HTTP probes on this port on a given node, with the path /healthz/ready to determine if the ingress controller is ready to receive traffic on the node. For proper operation the load balancer must not forward traffic to a node until the health check reports ready. The load balancer should also stop forwarding requests within a maximum of 45 seconds after /healthz/ready starts reporting not-ready. Probing every 5 to 10 seconds, with a 5-second timeout and with a threshold of two successful or failed requests to become healthy or unhealthy respectively, are well-tested values. When the value is 0 or is not specified it defaults to 1936. 15.1.57. .status.endpointPublishingStrategy.loadBalancer Description loadBalancer holds parameters for the load balancer. Present only if type is LoadBalancerService. Type object Required dnsManagementPolicy scope Property Type Description allowedSourceRanges `` allowedSourceRanges specifies an allowlist of IP address ranges to which access to the load balancer should be restricted. Each range must be specified using CIDR notation (e.g. "10.0.0.0/8" or "fd00::/8"). 
If no range is specified, "0.0.0.0/0" for IPv4 and "::/0" for IPv6 are used by default, which allows all source addresses. To facilitate migration from earlier versions of OpenShift that did not have the allowedSourceRanges field, you may set the service.beta.kubernetes.io/load-balancer-source-ranges annotation on the "router-<ingresscontroller name>" service in the "openshift-ingress" namespace, and this annotation will take effect if allowedSourceRanges is empty on OpenShift 4.12. dnsManagementPolicy string dnsManagementPolicy indicates if the lifecycle of the wildcard DNS record associated with the load balancer service will be managed by the ingress operator. It defaults to Managed. Valid values are: Managed and Unmanaged. providerParameters object providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. scope string scope indicates the scope at which the load balancer is exposed. Possible values are "External" and "Internal". 15.1.58. .status.endpointPublishingStrategy.loadBalancer.providerParameters Description providerParameters holds desired load balancer information specific to the underlying infrastructure provider. If empty, defaults will be applied. See specific providerParameters fields for details about their defaults. Type object Required type Property Type Description aws object aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. gcp object gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. ibm object ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. type string type is the underlying infrastructure provider for the load balancer. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "IBM", "Nutanix", "OpenStack", and "VSphere". 15.1.59. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws Description aws provides configuration settings that are specific to AWS load balancers. If empty, defaults will be applied. See specific aws fields for details about their defaults. Type object Required type Property Type Description classicLoadBalancer object classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. networkLoadBalancer object networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. type string type is the type of AWS load balancer to instantiate for an ingresscontroller. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 15.1.60. 
.status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer Description classicLoadBalancerParameters holds configuration parameters for an AWS classic load balancer. Present only if type is Classic. Type object Property Type Description connectionIdleTimeout string connectionIdleTimeout specifies the maximum time period that a connection may be idle before the load balancer closes the connection. The value must be parseable as a time duration value; see https://pkg.go.dev/time#ParseDuration . A nil or zero value means no opinion, in which case a default value is used. The default value for this field is 60s. This default is subject to change. subnets object subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. 15.1.61. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.classicLoadBalancer.subnets Description subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. Type object Property Type Description ids array (string) ids specifies a list of AWS subnets by subnet ID. Subnet IDs must start with "subnet-", consist only of alphanumeric characters, must be exactly 24 characters long, must be unique, and the total number of subnets specified by ids and names must not exceed 10. names array (string) names specifies a list of AWS subnets by subnet name. Subnet names must not start with "subnet-", must not include commas, must be under 256 characters in length, must be unique, and the total number of subnets specified by ids and names must not exceed 10. 15.1.62. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer Description networkLoadBalancerParameters holds configuration parameters for an AWS network load balancer. Present only if type is NLB. Type object Property Type Description eipAllocations array (string) eipAllocations is a list of IDs for Elastic IP (EIP) addresses that are assigned to the Network Load Balancer. The following restrictions apply: eipAllocations can only be used with external scope, not internal. An EIP can be allocated to only a single IngressController. The number of EIP allocations must match the number of subnets that are used for the load balancer. Each EIP allocation must be unique. A maximum of 10 EIP allocations are permitted. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html for general information about configuration, characteristics, and limitations of Elastic IP addresses. 
subnets object subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. 15.1.63. .status.endpointPublishingStrategy.loadBalancer.providerParameters.aws.networkLoadBalancer.subnets Description subnets specifies the subnets to which the load balancer will attach. The subnets may be specified by either their ID or name. The total number of subnets is limited to 10. In order for the load balancer to be provisioned with subnets, each subnet must exist, each subnet must be from a different availability zone, and the load balancer service must be recreated to pick up new values. When omitted from the spec, the subnets will be auto-discovered for each availability zone. Auto-discovered subnets are not reported in the status of the IngressController object. Type object Property Type Description ids array (string) ids specifies a list of AWS subnets by subnet ID. Subnet IDs must start with "subnet-", consist only of alphanumeric characters, must be exactly 24 characters long, must be unique, and the total number of subnets specified by ids and names must not exceed 10. names array (string) names specifies a list of AWS subnets by subnet name. Subnet names must not start with "subnet-", must not include commas, must be under 256 characters in length, must be unique, and the total number of subnets specified by ids and names must not exceed 10. 15.1.64. .status.endpointPublishingStrategy.loadBalancer.providerParameters.gcp Description gcp provides configuration settings that are specific to GCP load balancers. If empty, defaults will be applied. See specific gcp fields for details about their defaults. Type object Property Type Description clientAccess string clientAccess describes how client access is restricted for internal load balancers. Valid values are: * "Global": Specifying an internal load balancer with Global client access allows clients from any region within the VPC to communicate with the load balancer. https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#global_access * "Local": Specifying an internal load balancer with Local client access means only clients within the same region (and VPC) as the GCP load balancer can communicate with the load balancer. Note that this is the default behavior. https://cloud.google.com/load-balancing/docs/internal#client_access 15.1.65. .status.endpointPublishingStrategy.loadBalancer.providerParameters.ibm Description ibm provides configuration settings that are specific to IBM Cloud load balancers. If empty, defaults will be applied. See specific ibm fields for details about their defaults. Type object Property Type Description protocol string protocol specifies whether the load balancer uses PROXY protocol to forward connections to the IngressController. 
See "service.kubernetes.io/ibm-load-balancer-cloud-provider-enable-features: "proxy-protocol"" at https://cloud.ibm.com/docs/containers?topic=containers-vpc-lbaas PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. Valid values for protocol are TCP, PROXY and omitted. When omitted, this means no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is TCP, without the proxy protocol enabled. 15.1.66. .status.endpointPublishingStrategy.nodePort Description nodePort holds parameters for the NodePortService endpoint publishing strategy. Present only if type is NodePortService. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.67. .status.endpointPublishingStrategy.private Description private holds parameters for the Private endpoint publishing strategy. Present only if type is Private. Type object Property Type Description protocol string protocol specifies whether the IngressController expects incoming connections to use plain TCP or whether the IngressController expects PROXY protocol. PROXY protocol can be used with load balancers that support it to communicate the source addresses of client connections when forwarding those connections to the IngressController. Using PROXY protocol enables the IngressController to report those source addresses instead of reporting the load balancer's address in HTTP headers and logs. Note that enabling PROXY protocol on the IngressController will cause connections to fail if you are not using a load balancer that uses PROXY protocol to forward connections to the IngressController. See http://www.haproxy.org/download/2.2/doc/proxy-protocol.txt for information about PROXY protocol. The following values are valid for this field: * The empty string. * "TCP". * "PROXY". The empty string specifies the default, which is TCP without PROXY protocol. Note that the default is subject to change. 15.1.68. 
.status.namespaceSelector Description namespaceSelector is the actual namespaceSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.69. .status.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.70. .status.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.71. .status.routeSelector Description routeSelector is the actual routeSelector in use. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 15.1.72. .status.routeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 15.1.73. .status.routeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 15.1.74. .status.tlsProfile Description tlsProfile is the TLS connection configuration that is in effect. Type object Property Type Description ciphers array (string) ciphers is used to specify the cipher algorithms that are negotiated during the TLS handshake. Operators may remove entries their operands do not support. 
For example, to use DES-CBC3-SHA (yaml): ciphers: - DES-CBC3-SHA minTLSVersion string minTLSVersion is used to specify the minimal version of the TLS protocol that is negotiated during the TLS handshake. For example, to use TLS versions 1.1, 1.2 and 1.3 (yaml): minTLSVersion: VersionTLS11 NOTE: currently the highest minTLSVersion allowed is VersionTLS12 15.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/ingresscontrollers GET : list objects of kind IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers DELETE : delete collection of IngressController GET : list objects of kind IngressController POST : create an IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} DELETE : delete an IngressController GET : read the specified IngressController PATCH : partially update the specified IngressController PUT : replace the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale GET : read scale of the specified IngressController PATCH : partially update scale of the specified IngressController PUT : replace scale of the specified IngressController /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status GET : read status of the specified IngressController PATCH : partially update status of the specified IngressController PUT : replace status of the specified IngressController 15.2.1. /apis/operator.openshift.io/v1/ingresscontrollers HTTP method GET Description list objects of kind IngressController Table 15.1. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty 15.2.2. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers HTTP method DELETE Description delete collection of IngressController Table 15.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IngressController Table 15.3. HTTP responses HTTP code Reponse body 200 - OK IngressControllerList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressController Table 15.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.5. 
Body parameters Parameter Type Description body IngressController schema Table 15.6. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 202 - Accepted IngressController schema 401 - Unauthorized Empty 15.2.3. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} Table 15.7. Global path parameters Parameter Type Description name string name of the IngressController HTTP method DELETE Description delete an IngressController Table 15.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 15.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressController Table 15.10. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressController Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.12. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressController Table 15.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.14. Body parameters Parameter Type Description body IngressController schema Table 15.15. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty 15.2.4. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/scale Table 15.16. Global path parameters Parameter Type Description name string name of the IngressController HTTP method GET Description read scale of the specified IngressController Table 15.17. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified IngressController Table 15.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.19. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified IngressController Table 15.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.21. Body parameters Parameter Type Description body Scale schema Table 15.22. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 15.2.5. /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name}/status Table 15.23. Global path parameters Parameter Type Description name string name of the IngressController HTTP method GET Description read status of the specified IngressController Table 15.24. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified IngressController Table 15.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.26. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified IngressController Table 15.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.28. Body parameters Parameter Type Description body IngressController schema Table 15.29. HTTP responses HTTP code Reponse body 200 - OK IngressController schema 201 - Created IngressController schema 401 - Unauthorized Empty
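For orientation, several of the fields documented above can be combined in a single custom resource. The following is a minimal sketch only: the openshift-ingress-operator namespace and the spec.tlsSecurityProfile field path are assumptions based on common IngressController usage rather than values taken verbatim from the tables above, and the tuningOptions values are illustrative, not recommendations. apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator # assumed namespace; adjust for your cluster spec: tlsSecurityProfile: type: Intermediate intermediate: {} tuningOptions: reloadInterval: 15s healthCheckInterval: 10s threadCount: 4 Such a manifest would typically be applied with oc apply -f <file>, or the equivalent change could be made with a PATCH against the /apis/operator.openshift.io/v1/namespaces/{namespace}/ingresscontrollers/{name} endpoint listed above, for example: oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{"spec":{"tuningOptions":{"reloadInterval":"15s"}}}'.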
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/ingresscontroller-operator-openshift-io-v1
|
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation
|
Appendix A. Configuring a Local Repository for Offline Red Hat Virtualization Manager Installation To install Red Hat Virtualization Manager on a system that does not have a direct connection to the Content Delivery Network, download the required packages on a system that has Internet access, then create a repository that can be shared with the offline Manager machine. The system hosting the repository must be connected to the same network as the client systems where the packages are to be installed. Prerequisites A Red Hat Enterprise Linux 7 Server installed on a system that has access to the Content Delivery Network. This system downloads all the required packages, and distributes them to your offline system(s). A large amount of free disk space available. This procedure downloads a large number of packages, and requires up to 50GB of free disk space. Enable the Red Hat Virtualization Manager repositories on the online system: Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Configuring the Offline Repository Servers that are not connected to the Internet can access software repositories on other systems using File Transfer Protocol (FTP). To create the FTP repository, install and configure vsftpd : Install the vsftpd package: Start the vsftpd service, and ensure the service starts on boot: Create a sub-directory inside the /var/ftp/pub/ directory. This is where the downloaded packages will be made available: Download packages from all configured software repositories to the rhvrepo directory. This includes repositories for all Content Delivery Network subscription pools attached to the system, and any locally configured repositories: This command downloads a large number of packages, and takes a long time to complete. The -l option enables yum plug-in support. Install the createrepo package: Create repository metadata for each of the sub-directories where packages were downloaded under /var/ftp/pub/rhvrepo : Create a repository file, and copy it to the /etc/yum.repos.d/ directory on the offline machine on which you will install the Manager. The configuration file can be created manually or with a script. Run the script below on the system hosting the repository, replacing ADDRESS in the baseurl with the IP address or FQDN of the system hosting the repository: #!/bin/sh REPOFILE="/etc/yum.repos.d/rhev.repo" echo -e " " > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e "[USD(basename USDDIR)]" >> USDREPOFILE echo -e "name=USD(basename USDDIR)" >> USDREPOFILE echo -e "baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`" >> USDREPOFILE echo -e "enabled=1" >> USDREPOFILE echo -e "gpgcheck=0" >> USDREPOFILE echo -e "\n" >> USDREPOFILE done Return to Section 3.3, "Installing and Configuring the Red Hat Virtualization Manager" . 
Packages are installed from the local repository, instead of from the Content Delivery Network.
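As an illustration only, if the repository host has the placeholder address 192.0.2.1 and a single synchronized directory named rhel-7-server-rhv-4.3-manager-rpms exists under /var/ftp/pub/rhvrepo, the script above produces a /etc/yum.repos.d/rhev.repo entry on the offline Manager machine similar to the following: [rhel-7-server-rhv-4.3-manager-rpms] name=rhel-7-server-rhv-4.3-manager-rpms baseurl=ftp://192.0.2.1/pub/rhvrepo/rhel-7-server-rhv-4.3-manager-rpms enabled=1 gpgcheck=0 One such stanza is written for each synchronized directory, so running yum repolist on the offline machine should list one repository per directory.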
|
[
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-supplementary-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=rhel-7-server-rhv-4-manager-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum install vsftpd",
"systemctl start vsftpd.service systemctl enable vsftpd.service",
"mkdir /var/ftp/pub/rhvrepo",
"reposync -l -p /var/ftp/pub/rhvrepo",
"yum install createrepo",
"for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do createrepo USDDIR; done",
"#!/bin/sh REPOFILE=\"/etc/yum.repos.d/rhev.repo\" echo -e \" \" > USDREPOFILE for DIR in USD(find /var/ftp/pub/rhvrepo -maxdepth 1 -mindepth 1 -type d); do echo -e \"[USD(basename USDDIR)]\" >> USDREPOFILE echo -e \"name=USD(basename USDDIR)\" >> USDREPOFILE echo -e \"baseurl=ftp://_ADDRESS_/pub/rhvrepo/`basename USDDIR`\" >> USDREPOFILE echo -e \"enabled=1\" >> USDREPOFILE echo -e \"gpgcheck=0\" >> USDREPOFILE echo -e \"\\n\" >> USDREPOFILE done"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/configuring_an_offline_repository_for_red_hat_virtualization_manager_installation_sm_localdb_deploy
|
Chapter 2. About migrating from OpenShift Container Platform 3 to 4
|
Chapter 2. About migrating from OpenShift Container Platform 3 to 4 OpenShift Container Platform 4 contains new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. OpenShift Container Platform 4 clusters are deployed and managed very differently from OpenShift Container Platform 3. The most effective way to migrate from OpenShift Container Platform 3 to 4 is by using a CI/CD pipeline to automate deployments in an application lifecycle management framework. If you do not have a CI/CD pipeline or if you are migrating stateful applications, you can use the Migration Toolkit for Containers (MTC) to migrate your application workloads. You can use Red Hat Advanced Cluster Management for Kubernetes to help you import and manage your OpenShift Container Platform 3 clusters easily, enforce policies, and redeploy your applications. Take advantage of the free subscription to use Red Hat Advanced Cluster Management to simplify your migration process. To successfully transition to OpenShift Container Platform 4, review the following information: Differences between OpenShift Container Platform 3 and 4 Architecture Installation and upgrade Storage, network, logging, security, and monitoring considerations About the Migration Toolkit for Containers Workflow File system and snapshot copy methods for persistent volumes (PVs) Direct volume migration Direct image migration Advanced migration options Automating your migration with migration hooks Using the MTC API Excluding resources from a migration plan Configuring the MigrationController custom resource for large-scale migrations Enabling automatic PV resizing for direct volume migration Enabling cached Kubernetes clients for improved performance For new features and enhancements, technical changes, and known issues, see the MTC release notes .
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migrating_from_version_3_to_4/about-migrating-from-3-to-4
|
Installing on IBM Cloud VPC
|
Installing on IBM Cloud VPC OpenShift Container Platform 4.13 Installing OpenShift Container Platform IBM Cloud Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/index
|
Chapter 15. Load Balancer (octavia) Parameters
|
Chapter 15. Load Balancer (octavia) Parameters Parameter Description OctaviaAdminLogFacility The syslog "LOG_LOCAL" facility to use for the administrative log messages. The default value is 1 . OctaviaAdminLogTargets List of syslog endpoints, host:port comma separated list, to receive administrative log messages. OctaviaAmphoraExpiryAge The interval in seconds after which an unused Amphora will be considered expired and cleaned up. If left to 0, the configuration will not be set and the system will use the service defaults. The default value is 0 . OctaviaAmphoraSshKeyDir OpenStack Load Balancing-as-a-Service (octavia) generated SSH key directory. The default value is /etc/octavia/ssh . OctaviaAmphoraSshKeyFile Public key file path. User will be able to SSH into amphorae with the provided key. User may, in most cases, also elevate to root from user centos (CentOS), ubuntu (Ubuntu) or cloud-user (RHEL) (depends on how amphora image was created). Logging in to amphorae provides a convenient way to e.g. debug load balancing services. OctaviaAmphoraSshKeyName SSH key name. The default value is octavia-ssh-key . OctaviaAntiAffinity Flag to indicate if anti-affinity feature is turned on. The default value is true . OctaviaCaCert OpenStack Load Balancing-as-a-Service (octavia) CA certificate data. If provided, this will create or update a file on the host with the path provided in OctaviaCaCertFile with the certificate data. OctaviaCaKey The private key for the certificate provided in OctaviaCaCert. If provided, this will create or update a file on the host with the path provided in OctaviaCaKeyFile with the key data. OctaviaCaKeyPassphrase CA private key passphrase. OctaviaClientCert OpenStack Load Balancing-as-a-Service (octavia) client certificate data. If provided, this will create or update a file on the host with the path provided in OctaviaClientCertFile with the certificate data. OctaviaConnectionLogging When false, tenant connection flows will not be logged. The default value is true . OctaviaDefaultListenerCiphers Default list of OpenSSL ciphers for new TLS-enabled listeners. The default value is TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256 . OctaviaDefaultPoolCiphers Default list of OpenSSL ciphers for new TLS-enabled pools. The default value is TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256 . OctaviaDisableLocalLogStorage When true, logs will not be stored on the amphora filesystem. This includes all kernel, system, and security logs. The default value is false . OctaviaEnableDriverAgent Set to false if the driver agent needs to be disabled for some reason. The default value is true . OctaviaFlavorId OpenStack Compute (nova) flavor ID to be used when creating the nova flavor for amphora. The default value is 65 . OctaviaForwardAllLogs When true, all log messages from the amphora will be forwarded to the administrative log endpoints, including non-load balancing related logs. The default value is false . 
OctaviaGenerateCerts Enable internal generation of certificates for secure communication with amphorae for isolated private clouds or systems where security is not a concern. Otherwise, use OctaviaCaCert, OctaviaCaKey, OctaviaCaKeyPassphrase, OctaviaClientCert and OctaviaServerCertsKeyPassphrase to configure OpenStack Load Balancing-as-a-Service (octavia). The default value is false . OctaviaListenerTlsVersions List of TLS versions to use for new TLS-enabled listeners. The default value is ['TLSv1.2', 'TLSv1.3'] . OctaviaLoadBalancerTopology Load balancer topology configuration. OctaviaLogOffload When true, log messages from the amphora will be forwarded to the administrative log endpoints and will be stored with the controller logs. The default value is false . OctaviaMinimumTlsVersion Minimum allowed TLS version for listeners and pools. OctaviaPoolTlsVersions List of TLS versions to use for new TLS-enabled pools. The default value is ['TLSv1.2', 'TLSv1.3'] . OctaviaTenantLogFacility The syslog "LOG_LOCAL" facility to use for the tenant traffic flow log messages. The default value is 0 . OctaviaTenantLogTargets List of syslog endpoints, host:port comma separated list, to receive tenant traffic flow log messages. OctaviaTimeoutClientData Frontend client inactivity timeout. The default value is 50000 . OctaviaTimeoutMemberData Backend member inactivity timeout. The default value is 50000 . OctaviaTlsCiphersProhibitList List of OpenSSL ciphers. Usage of these ciphers will be blocked.
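These parameters are normally set in a custom environment file that is passed to the overcloud deployment command with the -e option. The following sketch assumes a file named octavia-logging.yaml and uses placeholder values; only the parameter names come from the table above: parameter_defaults: OctaviaLogOffload: true OctaviaConnectionLogging: true OctaviaAdminLogTargets: ['192.0.2.10:514'] OctaviaTenantLogTargets: ['192.0.2.10:514'] OctaviaTimeoutClientData: 50000 OctaviaTimeoutMemberData: 50000 The file would then be included in the deployment, for example: openstack overcloud deploy --templates ... -e octavia-logging.yaml. Whether list-valued parameters such as OctaviaAdminLogTargets are expressed as a YAML list or as a comma-separated string depends on the template definition, so verify the expected format against your deployed templates before use.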
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_load-balancer-octavia-parameters_overcloud_parameters
|
Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster
|
Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute machines to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster on the x86_64 architecture. You can use RHEL as the operating system only on compute machines. 9.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.11, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. You must add any RHEL compute machines to the cluster after you initialize the control plane. 9.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 8.6 and later with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation.
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 9.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 9.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required because various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You will need a valid AMI ID so that the correct RHEL version needed for the compute machines is selected. 9.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 8.4 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.4*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . 
In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filters command option sets which version of RHEL is shown. In this example, because the filter is set to "Name=name,Values=RHEL-8.4*" , RHEL 8.4 AMIs are shown. 4 The --region command option sets the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.4 or 8.5. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS . 9.4. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.11 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent.
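The SSH agent requirement in the preceding note can be satisfied with the standard agent workflow, sketched below; the key path is a hypothetical example and should be replaced with the key that actually grants access to your RHEL hosts. Run these commands as the user that you configured for SSH access.

eval "$(ssh-agent -s)"     # start an agent in the current shell
ssh-add ~/.ssh/id_rsa      # load the private key used to reach the RHEL compute hosts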
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.11: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.11-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line. 9.5. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. Ensure NetworkManager is enabled and configured to control all interfaces on the host. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories. Enable only the repositories required by OpenShift Container Platform 4.11: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.11-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 9.6. Attaching the role permissions to RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 9.7.
Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 9.8. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.11 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 9.9. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. 
You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 9.10. 
Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify and define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 9.10.1. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
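After the playbook run and the optional RHCOS removal, a quick sanity check is to list nodes by operating system. This is a hedged sketch: it assumes the node.openshift.io/os_id label is populated on your nodes, which is typical but worth confirming with oc get nodes --show-labels.

oc get nodes -l node.openshift.io/os_id=rhel -o wide     # RHEL compute machines that joined
oc get nodes -l node.openshift.io/os_id=rhcos -o wide    # remaining RHCOS machines, if any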
|
[
"aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5",
"------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.11-for-rhel-8-x86_64-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.11-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/machine_management/adding-rhel-compute
|
6.0 Technical Notes
|
6.0 Technical Notes Red Hat Enterprise Linux 6 Technical Release Documentation Red Hat Engineering Content Services
|
[
"attempt to access beyond end of device loop0: rw=0, want=248626, limit=248624",
"network --device eth0 --onboot yes --bootproto dhcp services --enabled=network"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/index
|
Chapter 1. Installing and running the IdM Healthcheck tool
|
Chapter 1. Installing and running the IdM Healthcheck tool Learn more about the IdM Healthcheck tool and how to install and run it. Note The Healthcheck tool is only available on RHEL 8.1 or later. 1.1. Healthcheck in IdM The Healthcheck tool in Identity Management (IdM) helps find issues that might impact the health of your IdM environment. Note The Healthcheck tool is a command line tool that can be used without Kerberos authentication. Modules are Independent Healthcheck consists of independent modules which test for: Replication issues Certificate validity Certificate Authority infrastructure issues IdM and Active Directory trust issues Correct file permissions and ownership settings Two output formats Healthcheck generates the following outputs, which you can set using the output-type option: json : Machine-readable output in JSON format (default) human : Human-readable output You can specify a different file destination with the --output-file option. Results Each Healthcheck module returns one of the following results: SUCCESS configured as expected WARNING not an error, but worth keeping an eye on or evaluating ERROR not configured as expected CRITICAL not configured as expected, with a high possibility for impact 1.2. Installing IdM Healthcheck You can install the IdM Healthcheck tool. Procedure Install the ipa-healthcheck package: Note On RHEL 8.1 and 8.2 systems, use the yum install /usr/bin/ipa-healthcheck command instead. Verification Use the --failures-only option to have ipa-healthcheck only report errors. A fully-functioning IdM installation returns an empty result of [] . Additional resources Use ipa-healthcheck --help to see all supported arguments. 1.3. Running IdM Healthcheck Healthcheck can be run manually or automatically using log rotation . Prerequisites The Healthcheck tool must be installed. See Installing IdM Healthcheck . Procedure To run healthcheck manually, enter the ipa-healthcheck command. Additional resources For all options, see the man page: man ipa-healthcheck . 1.4. Log rotation Log rotation creates a new log file every day, and the files are organized by date. Since log files are saved in the same directory, you can select a particular log file according to the date. Rotation means that a maximum number of log files is configured; if that number is exceeded, the newest file replaces and renames the oldest one. For example, if the rotation number is 30, the thirty-first log file replaces the first (oldest) one. Log rotation reduces voluminous log files and organizes them, which can help with analysis of the logs. 1.5. Configuring log rotation using the IdM Healthcheck Follow this procedure to configure a log rotation with: The systemd timer The crond service The systemd timer runs the Healthcheck tool periodically and generates the logs. The default value is set to 4 a.m. every day. The crond service is used for log rotation. The default log name is healthcheck.log and the rotated logs use the healthcheck.log-YYYYMMDD format. Prerequisites You must execute commands as root. Procedure Enable a systemd timer: Start the systemd timer: Open the /etc/logrotate.d/ipahealthcheck file to configure the number of logs which should be saved. By default, log rotation is set up for 30 days. In the /etc/logrotate.d/ipahealthcheck file, configure the path to the logs. By default, logs are saved in the /var/log/ipa/healthcheck/ directory. In the /etc/logrotate.d/ipahealthcheck file, configure the time for log generation.
By default, a log is created daily at 4 a.m. To use log rotation, ensure that the crond service is enabled and running: To start with generating logs, start the IPA healthcheck service: To verify the result, go to /var/log/ipa/healthcheck/ and check if logs are created correctly. 1.6. Changing IdM Healthcheck configuration You can change Healthcheck settings by adding the desired command line options to the /etc/ipahealthcheck/ipahealthcheck.conf file. This can be useful when, for example, you configured a log rotation and want to ensure the logs are in a format suitable for automatic analysis, but do not want to set up a new timer. Note This Healthcheck feature is only available on RHEL 8.7 and newer. After the modification, all logs that Healthcheck creates follow the new settings. These settings also apply to any manual execution of Healthcheck. Note When running Healthcheck manually, settings in the configuration file take precedence over options specified in the command line. For example, if output_type is set to human in the configuration file, specifying json on the command line has no effect. Any command line options you use that are not specified in the configuration file are applied normally. Additional resources Configuring log rotation using the IdM Healthcheck 1.7. Configuring Healthcheck to change the output logs format Follow this procedure to configure Healthcheck with a timer already set up. In this example, you configure Healthcheck to produce logs in a human-readable format and to also include successful results instead of only errors. Prerequisites Your system is running RHEL 8.7 or later. You have root privileges. You have previously configured log rotation on a timer. Procedure Open the /etc/ipahealthcheck/ipahealthcheck.conf file in a text editor. Add options output_type=human and all=True to the [default] section. Save and close the file. Verification Run Healthcheck manually: Go to /var/log/ipa/healthcheck/ and check that the logs are in the correct format. Additional resources Configuring log rotation using the IdM Healthcheck 1.8. Additional resources See the following sections of the Configuring and managing Identity Management guide for examples of using IdM Healthcheck. Checking services Verifying your IdM and AD trust configuration Verifying certificates Verifying system certificates Checking disk space Verifying permissions of IdM configuration files Checking replication You can also see those chapters organized into a single guide: Using IdM Healthcheck to monitor your IdM environment
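As a concrete illustration of section 1.7, the relevant part of /etc/ipahealthcheck/ipahealthcheck.conf would then look like the sketch below; output_type and all are the options named in the text, and any further keys should be checked against man ipa-healthcheck before use.

[default]
output_type=human
all=True

After saving the file, both a manual ipa-healthcheck run and the timer-driven runs produce human-readable logs that include successful results.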
|
[
"yum install ipa-healthcheck",
"ipa-healthcheck --failures-only []",
"ipa-healthcheck",
"systemctl enable ipa-healthcheck.timer Created symlink /etc/systemd/system/multi-user.target.wants/ipa-healthcheck.timer -> /usr/lib/systemd/system/ipa-healthcheck.timer.",
"systemctl start ipa-healthcheck.timer",
"systemctl enable crond systemctl start crond",
"systemctl start ipa-healthcheck",
"ipa-healthcheck"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_idm_healthcheck_to_monitor_your_idm_environment/installing-and-running-the-ipa-healthcheck-tool_using-idm-healthcheck-to-monitor-your-idm-environment
|
Chapter 57. Desktop
|
Chapter 57. Desktop Cannot install downloaded RPM files from Nautilus The yum backend to PackageKit does not support getting details about local files. As a consequence, when an RPM file is double-clicked in the Nautilus file manager, the file is not installed, and the following error message is returned: To work around this problem, either install the gnome-packagekit package to handle the double-click action, or manually install the files using the yum utility. (BZ# 1434477 ) Caps Lock LED status When using a UTF-8 keymap, even though the caps lock function works properly, the caps lock LED is not updated while in TTY mode. For the LED to be correctly updated, starting from Red Hat Enterprise Linux 7.5, the administrator needs to create the /etc/udev/rules.d/99-kbd.rules configuration file as follows: To reload the new udev rule, run these commands: After this change, when pressing the caps lock key, the caps lock LED changes its status as expected. (BZ# 1470932 , BZ#1256895) Inconsistent GNOME Shell versions The GNOME desktop environment currently displays different versions of GNOME Shell . For example, the version returned by the gnome-shell --version command is different from the version found in the Details section of Settings . (BZ# 1511454 ) Uninstall the 32-bit version of flatpak Users are advised to uninstall the 32-bit version of the flatpak packages before updating to Red Hat Enterprise Linux 7.5 to prevent possible multilib conflicts. (BZ#1512940) GNOME downgrade does not work With the new version of GNOME (3.22) introduced in Red Hat Enterprise Linux 7.4, downgrading GNOME from version 3.22 to 3.14 using the yum downgrade or dnf downgrade commands is no longer possible. The only workaround lies in replacing the GNOME-related packages with their old versions. If you decide to downgrade manually, read the GNOME 3.16-3.22 release notes to find which functionalities you are losing. (BZ# 1451876 ) Wayland ignores keyboard grabs issued by X11 applications, such as virtual machine viewers Currently, when running through the XWayland server, graphical clients that rely on the X11 software, such as remote desktop viewers or virtual machine managers, are unable to obtain the system keyboard shortcuts for their own use. As a consequence, activating these shortcuts in a guest window, such as a virt-manager guest display, affects the local desktop instead of the guest. To work around the problem, use a Wayland native client with support for the Wayland shortcuts inhibitor protocol, or switch back to the default GNOME session on X11 to run the X11 clients that require system keyboard shortcuts. Note that Wayland is available as a Technology Preview. (BZ# 1500397 ) Superuser should not run graphical sessions Opening a graphical session for the root user causes various bugs. The reason is that a graphical session is not meant to be used by the superuser, as it can cause serious and unexpected issues, is non-secure, and is against Unix principles. (BZ#1539772) Keyboard not working in VM browsed by remote-viewer and virt-viewer When run inside a Wayland session, the remote-viewer and virt-viewer utilities do not recognize key events in a virtual machine. Moreover, Xwayland reports the following error: (BZ# 1540056 ) gnome-system-log does not work on Wayland Currently, when logged in to a Wayland session, the root user is not allowed to access the user's Xwayland display. As a consequence, running the gnome-system-log utility in a terminal does not display system log files.
To work around this problem, run the xhost server access control program as follows: (BZ# 1537529 ) GUI screen is shown incorrectly The X driver for Emulex Pilot2 and Pilot3 cards contains a bug when running at a color depth of 16. This bug makes the graphics display unusable at this depth. To make the display usable in some configurations, use the 24 bpp image format. Alternatively, disable the shadow framebuffer abstraction layer in the xorg.conf file by using the ShadowFB off option. Note that disabling the shadow framebuffer may have a significant performance impact. (BZ#1499129) xrandr fails to provide some video modes Different video drivers for X11 have different heuristics for adding display resolutions. In particular, the Intel and generic modesetting drivers provide different sets of video modes for some laptop displays. Consequently, some non-native video modes may not be available in all configurations. To work around this problem, use a different video driver, or add resolutions to the output manually using the xrandr(1) command-line utility. (BZ#1478625) radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this bug, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will complete successfully. (BZ#1509444) nouveau fails to load Nvidia secboot firmware In some Dell Coffeelake systems, the nouveau kernel module fails to load Nvidia secboot firmware for the pascal cards. As a consequence, the Nvidia GPU on these systems occasionally does not work, and some of the display ports on the system thus do not work as well. If this bug causes trouble booting, blacklist nouveau to mitigate the problem. Note that this, however, will not make non-functional ports on the machine work correctly. (BZ#1535168) Xchat status icon disappears from Top Icons panel The Xchat status icon indicating incoming personal messages disappears from the Top Icons panel after suspending the system and resuming it again. Top icons installed using Gnome Software preserve the suspend mode and do not disappear from the panel. (BZ#1544840) GDM does not activate hotplugged monitors When a machine is booted without a monitor connected, the GNOME Display Manager (GDM) screen remains deactivated when a monitor is plugged in. As a workaround, kill GDM while the monitor is plugged in by running: Alternatively, use the xrandr utility to activate the monitor. (BZ# 1497303 ) Wacom Expresskeys Remote not detected as tablet The gnome-shell and control-center utilities do not detect unpaired Wacom Expresskeys Remote devices (EKRs). As a consequence, within the Wacom settings, there is no way to map the buttons on the EKR . Currently, the EKR works only when it is paired to a tablet with a built-in pad. (BZ# 1543631 ) Synaptics dependency removes xorg-x11-drivers Later releases of Red Hat Enterprise Linux 7 contain the xorg-x11-drv-libinput driver for X, which can potentially provide a superior experience for some input devices. Users attempting to switch to xorg-x11-drv-libinput can try removing the xorg-x11-drv-synaptics driver, which is required by the xorg-x11-drivers package.
However, removing synaptics requires removing xorg-x11-drivers . To work around this issue, remove xorg-x11-drivers . This package exists only to install a reasonable collection of drivers at system setup time, and removing it has no runtime impact. Any X driver already installed will be updated as expected. (BZ# 1516970 ) T470s docking station jack does not work on resume After suspending and resuming a ThinkPad T470s connected to the docking station with analog audio input or output, the user does not receive any output sound. This problem does not affect the analog audio input or output in the ThinkPad laptop. (BZ#1548055) Screen occasionally turns off when xrandr is executed With the Nouveau driver, RANDR operations combined with heavy 3D load, such as querying the screen resolution, may cause screen flickering. Flickering can be avoided by minimizing concurrent 3D and RANDR operations. Hence, query or resize the screen while 3D usage is minimal. (BZ# 1545550 ) HDMI and DP for 8th generation Intel Core processors not enumerating sound inputs In Red Hat Enterprise Linux, support for alpha status hardware is disabled in the i915 driver by default, which prevents i915 from binding to the audio driver. As a consequence, HDMI and DP video and audio standards for 8th generation Intel Core processors do not enumerate sound inputs. To work around this issue, boot your system with the i915.alpha_support=1 line added to the kernel command line. (BZ#1540643) Tray icons are non-responsive for auto-started applications The GNOME Shell TopIcons extension, which shows legacy tray icons on the top of the screen, does not work for auto-started applications: the tray icons are non-responsive. This bug does not affect applications started after the GNOME session starts. As a workaround, follow this short procedure to restart the GNOME session: 1. press Alt + F2 , 2. type r , 3. press Enter . (BZ# 1550115 ) Inconsistent panel color on login screen When logging in to a GNOME Classic session, suspending the laptop and resuming it again, the top panel on the login screen is white instead of black. This problem does not affect GNOME Classic functionality. (BZ# 1541021 ) Additional displays are mirrored after attaching a VM guest When opening a guest VM monitor and enabling an additional display from the remote-viewer menu, the content of the first display is mirrored to the newly attached one. As a workaround, resize the remote-viewer frame of any display. The desktop environment will be extended to both displays and guest displays will be properly rearranged. (BZ#1539686)
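The nouveau workaround above mentions blacklisting the module without showing the mechanics. A common approach, offered only as a hedged sketch (the file name is arbitrary and the steps should be validated on a test system before use), is to run the following as root:

echo "blacklist nouveau" >> /etc/modprobe.d/blacklist-nouveau.conf
dracut --force     # rebuild the initramfs so the blacklist takes effect at boot
reboot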
|
[
"Sorry, this did not work, File is not supported",
"ACTION==\"add\", SUBSYSTEM==\"leds\", ENV{DEVPATH}==\"*/input*::capslock\", ATTR{trigger}=\"kbd-ctrlllock\"",
"udevadm control --reload-rules udevadm trigger",
"send_key: assertion 'scancode != 0'",
"xhost +si:localuser:root",
"dracut_args --omit-drivers \"radeon\" force_rebuild 1",
"systemctl restart gdm.service"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/known_issues_desktop
|
7.83. hwdata
|
7.83. hwdata 7.83.1. RHEA-2013:0376 - hwdata enhancement update An updated hwdata package that adds various enhancements is now available for Red Hat Enterprise Linux 6. The hwdata package contains tools for accessing and displaying hardware identification and configuration data. Enhancements BZ# 839221 The PCI ID numbers have been updated for the Beta and the Final compose lists. BZ#739816 Support for NVidia graphic card N14E-Q5, 0x11BC has been added. BZ#739819 Support for NVidia graphic card N14E-Q3, 0x11BD has been added. BZ#739821 Support for NVidia graphic card N14E-Q1, 0x11BE has been added. BZ#739824 Support for NVidia graphic card N14P-Q3, 0x0FFB has been added. BZ#739825 Support for NVidia graphic card N14P-Q1, 0x0FFC has been added. BZ#760031 Support for Broadcom BCM943228HM4L 802.11a/b/g/n 2x2 Wi-Fi Adapter has been added. BZ#830253 Support for Boot from Dell PowerEdge Express Flash PCIe SSD devices has been added. BZ#841423 Support for the Intel C228 chipset and a future Intel processor based on Socket H3 has been added. BZ#814114 This update also adds the current hardware USB IDs file from the upstream repository. This file provides support for Broadcom 20702 Bluetooth 4.0 Adapter Softsailing. All users of hwdata are advised to upgrade to this updated package, which adds these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/hwdata
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/upgrading_connected_red_hat_satellite_to_6.15/providing-feedback-on-red-hat-documentation_upgrading-connected
|
7.2. Installing the audit Packages
|
7.2. Installing the audit Packages In order to use the Audit system, you must have the audit packages installed on your system. The audit packages ( audit and audit-libs ) are installed by default on Red Hat Enterprise Linux 7. If you do not have these packages installed, execute the following command as the root user to install Audit and the dependencies:
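~]# yum install audit
After the packages are installed, a quick check that the Audit daemon is active can be done as follows; this verification step is a general sketch and is not part of the official procedure.
~]# systemctl status auditd
~]# auditctl -s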
|
[
"~]# yum install audit"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-installing_the_audit_packages
|
Chapter 5. Post-installation node tasks
|
Chapter 5. Post-installation node tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks. 5.1. Adding RHEL compute machines to an OpenShift Container Platform cluster Understand and work with RHEL compute nodes. 5.1.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.9, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster. As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Important Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. You must add any RHEL compute machines to the cluster after you initialize the control plane. 5.1.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.9 or RHEL 7.9 through 8.7 with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is deprecated. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. In addition, you cannot upgrade your RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Enabling FIPS Mode in the RHEL 7 documentation. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 
NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 5.1.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.1.3. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.9 cluster, you must prepare a RHEL 7 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 7 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. 
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.9: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-ansible-2.9-rpms" \ --enable="rhel-7-server-ose-4.9-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line. 5.1.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories. Enable only the repositories required by OpenShift Container Platform 4.9. For RHEL 7 nodes, you must enable the following repositories: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-fast-datapath-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-optional-rpms" \ --enable="rhel-7-server-ose-4.9-rpms" Note Use of RHEL 7 nodes is deprecated and planned for removal in a future release of OpenShift Container Platform 4. For RHEL 8 nodes, you must enable the following repositories: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.9-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 5.1.5.
Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.9 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 5.1.6. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify and define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 5.1.7. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.
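To avoid repeating the cordon, drain, and delete steps for each RHCOS machine by hand, the loop below sketches one way to batch them; the node names are hypothetical placeholders, and the individual commands are exactly those shown in the procedure above.

for node in rhcos-worker-0 rhcos-worker-1; do
  oc adm cordon "$node"
  oc adm drain "$node" --force --delete-emptydir-data --ignore-daemonsets
  oc delete nodes "$node"
done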
5.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal. Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. 5.2.1. Prerequisites You installed a cluster on bare metal. You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . 5.2.2. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. 
Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 5.2.3. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 5.2.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. 
The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.3. Deploying machine health checks Understand and deploy machine health checks. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 5.3.1. About machine health checks Machine health checks automatically repair unhealthy machines in a particular machine pool. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. Note You cannot apply a machine health check to a machine with the master role. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 5.3.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. 
Control plane machines are not currently supported and are not remediated if they are unhealthy. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 5.3.2. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the number of machines allowed to be remediated concurrently in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 5.3.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 5.3.2.1.1.
Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 5.3.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 5.3.3. Creating a MachineHealthCheck resource You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml 5.3.4. Scaling a machine set manually To add or remove an instance of a machine in a machine set, you can manually scale the machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the machines that are in the cluster: USD oc get machine -n openshift-machine-api Set the annotation on the machine that you want to delete: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true" Cordon and drain the node that you want to delete: USD oc adm cordon <node_name> USD oc adm drain <node_name> Scale the machine set: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the machine set up or down. It takes several minutes for the new machines to be available. Verification Verify the deletion of the intended machine: USD oc get machines 5.3.5. Understanding the difference between machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 5.4. 
Recommended node host practices The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . podsPerCore cannot exceed maxPods . maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 5.4.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. 
Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=large-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... 
Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 5.4.2. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Set maxUnavailable to the value that you want: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 5.4.3. Control plane node sizing The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts: 12 image streams 3 build configurations 6 builds 1 deployment with 2 pod replicas mounting two secrets each 2 deployments with 1 pod replica mounting two secrets 3 services pointing to the deployments 3 routes pointing to the deployments 10 secrets, 2 of which are mounted by the deployments 10 config maps, 2 of which are mounted by the deployments Number of worker nodes Cluster load (namespaces) CPU cores Memory (GB) 25 500 4 16 100 1000 8 32 250 4000 16 96 On a large and dense cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails. The failures can be due to unexpected issues with power, network or underlying infrastructure in addition to intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available which leads to increase in the resource usage. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. 
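As a quick spot check against the 60% guidance above, you can compare the current control plane resource usage with the allocatable capacity of the control plane nodes. The following commands are a sketch only and assume that cluster metrics are available and that the control plane nodes carry the default node-role.kubernetes.io/master label; <control_plane_node> is a placeholder.

$ oc adm top nodes -l node-role.kubernetes.io/master=
$ oc describe node <control_plane_node> | grep -A 8 "Allocated resources"

If the reported CPU or memory usage regularly approaches the 60% threshold, increase the control plane node size before you add more worker nodes or namespaces.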
Operator Lifecycle Manager (OLM) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user-installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.9 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. Note In OpenShift Container Platform 4.9, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration. 5.4.4. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the previous step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed.
Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. 
The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 5.5. Huge pages Understand and configure huge pages. 5.5.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. 5.5.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. 
To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 5.5.3. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. 5.5.3.1. At boot time Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Warning This functionality is currently only supported on Red Hat Enterprise Linux CoreOS (RHCOS) 8.x worker nodes. On Red Hat Enterprise Linux (RHEL) 7.x worker nodes the TuneD [bootloader] plugin is currently not supported. 5.6. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. 
The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. Any device plugin must support the following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state changes or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 5.6.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugins/ on the node to serve RPCs from Device Manager. Because device plugins must manage hardware resources, access the host file system, and create sockets, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 5.6.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager by invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plugin service.
In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks in its database to verify if a corresponding plugin exists or not. If the plugin exists and there are free allocatable devices according to its local cache, the Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 5.6.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command. Perform one of the following steps: View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true . Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 5.7. Taints and tolerations Understand and work with taints and tolerations. 5.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification spec: taints: - effect: NoExecute key: key1 value: value1 .... Example toleration in a Pod spec spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 .... Taints and tolerations consist of a key, value, and effect. Table 5.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters.
The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 5.7.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that does tolerate the taint, which has the tolerationSeconds parameter, the pod is not evicted until that time period expires. 
Example output spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 Here, if this pod is running and a matching taint is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted. 5.7.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries not to schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 5.7.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class.
This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node. 5.7.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 5.7.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and value parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints spec: tolerations: - operator: "Exists" 5.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. 
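Before you add new taints, it can help to list the taints that are already set on your nodes so that you do not conflict with existing scheduling rules. The following is a minimal sketch rather than part of the official procedure; the jsonpath formatting is only one possible way to print the information:
oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
Nodes that have no taints print an empty value in the second column.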
For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 5.7.3. Adding taints and tolerations using a machine set You can add taints to nodes using a machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a machine set specification spec: .... template: .... spec: taints: - effect: NoExecute key: key1 value: value1 .... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. 
Scale down the machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 5.7.4. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want to ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: dedicated value: groupName effect: NoSchedule Add a toleration to the pods by writing a custom admission controller. 5.7.5. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule 5.7.6. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration.
Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 5.8. Topology Manager Understand and work with Topology Manager. 5.8.1. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the cpumanager-enabled custom resource (CR): none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 5.8.2. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the cpumanager-enabled custom resource (CR). This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the cpumanager-enabled custom resource (CR). USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: none , best-effort , restricted , single-numa-node . 5.8.3. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified.
spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 5.9. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 5.10. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 
4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project: apiVersion: v1 kind: Namespace metadata: .... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" .... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. 5.10.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create Instance . On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. 
apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "4.9" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 
2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 ... 1 Add this label to each project. 5.11. 
Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QoS) guarantees, CPU limits, or reserving resources. You can also disable overcommit for specific nodes and specific projects. 5.11.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 5.11.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it will be throttled not to use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 5.11.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 5.11.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 5.2. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority.
Processes in these containers are first to be terminated if the system runs out of memory. 5.11.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 5.11.3. Understanding swap memory and QoS You can disable swap by default on your nodes to preserve quality of service (QoS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 5.11.4. Understanding node overcommitment In an overcommitted environment, it is important to properly configure your node to provide the best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 .
A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current settings by running the following commands on your nodes: USD sysctl -a |grep commit Example output vm.overcommit_memory = 1 USD sysctl -a |grep panic Example output vm.panic_on_oom = 0 Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 5.11.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false to disable enforcement of CPU limits. Run the following command to create the CR: USD oc create -f <file_name>.yaml 5.11.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 5.11.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 5.12. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed.
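For example, the following is a minimal LimitRange sketch that sets default limits and default requests for containers in a project; the object name, namespace, and values shown here are illustrative only, not a recommended baseline:
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits        # illustrative object name
  namespace: my-project        # illustrative project name
spec:
  limits:
  - type: Container
    default:                   # limit applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
    defaultRequest:            # request applied when a container sets no request
      cpu: 250m
      memory: 256Mi
Because containers that omit limits receive these defaults, a LimitRange like this also gives the Cluster Resource Override Operator limits to act on in projects where Pod specs do not set them.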
For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 5.12.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the project object file. Add the following annotation: quota.openshift.io/cluster-resource-override-enabled: "false" Create the project object: USD oc create -f <file-name>.yaml 5.13. Freeing node resources using garbage collection Understand and use garbage collection. 5.13.1. Understanding how terminated containers are removed through garbage collection Container garbage collection can be performed using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 5.3. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 5.13.2. Understanding how images are removed through garbage collection Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node. The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to reduce disk usage. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 5.4. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection.
The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 5.13.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 1 Name for the object. 2 Specify the label from the machine config pool. 3 Type of eviction: evictionSoft or evictionHard . 4 Eviction thresholds based on a specific eviction trigger signal. 5 Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. 
If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 The duration to wait before transitioning out of an eviction pressure condition. 8 The minimum age for an unused image before the image is removed by garbage collection. 9 The percent of disk usage (expressed as an integer) that triggers image garbage collection. 10 The percent of disk usage (expressed as an integer) that image garbage collection attempts to free. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 5.14. Using the Node Tuning Operator Understand and use the Node Tuning Operator. The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. 5.14.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run: USD oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator. 5.14.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. 
The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . 
This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. 5.14.3. Default profiles set on a cluster The following are the default profiles set on a cluster. 
apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: "openshift-control-plane" priority: 30 match: - label: "node-role.kubernetes.io/master" - label: "node-role.kubernetes.io/infra" - profile: "openshift-node" priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 5.14.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: bootloader script systemd See Available TuneD Plugins and Getting Started with TuneD for more information. 5.15. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. 
The UPDATING column reports True if the change is picked up by the Machine Config Controller: $ oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True. $ oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False
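After the worker pool reports UPDATED, you can optionally confirm that the kubelet picked up the new values. The following is a minimal verification sketch, not part of the original procedure; <node_name> is a placeholder for a node in the updated pool, and both commands also appear elsewhere in this document:
oc get kubeletconfigs set-max-pods -o yaml    # the status conditions should report Success
oc describe node <node_name>                  # under Allocatable, the pods: value reflects the lower of podsPerCore x cores and maxPods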
|
[
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.9-rpms\"",
"subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.9-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.22.1 master-1 Ready master 63m v1.22.1 master-2 Ready master 64m v1.22.1",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.22.1 master-1 Ready master 73m v1.22.1 master-2 Ready master 74m v1.22.1 worker-0 Ready worker 11m v1.22.1 worker-1 Ready worker 11m v1.22.1",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc adm cordon <node_name> oc adm drain <node_name>",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"spec: taints: - effect: NoExecute key: key1 value: value1 .",
"spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 .",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\"",
"spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300",
"spec: tolerations: - operator: \"Exists\"",
"spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit machineset <machineset>",
"spec: . template: . spec: taints: - effect: NoExecute key: key1 value: value1 .",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: dedicated value: groupName effect: NoSchedule",
"spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: disktype value: ssd effect: PreferNoSchedule",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: . labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.9\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"vm.overcommit_memory = 1",
"sysctl -a |grep panic",
"vm.panic_on_oom = 0",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: 3 - \"true\"",
"oc create -f <file_name>.yaml",
"sysctl -w vm.overcommit_memory=0",
"quota.openshift.io/cluster-resource-override-enabled: \"false\"",
"oc create -f <file-name>.yaml",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/post-install-node-tasks
|
Chapter 4. Setting up your container repository
|
Chapter 4. Setting up your container repository You can set up your container repository to add a description, include a README, add groups who can access the repository, and tag images. 4.1. Prerequisites You are logged in to a private automation hub with permissions to change the repository. 4.2. Adding a README to your container repository Add a README to your container repository to provide instructions to your users for how to work with the container. Automation hub container repositories support Markdown for creating a README. By default, the README will be empty. Prerequisites You have permissions to change containers. Procedure Navigate to Execution Environments . Select your container repository. On the Detail tab, click Add . In the Raw Markdown text field, enter your README text in Markdown. Click Save when you are finished. Once you add a README, you can edit it at any time by clicking Edit and repeating steps 4 and 5. 4.3. Providing access to your container repository Provide access to your container repository for users who need to work with the images. Adding a group allows you to modify the permissions the group can have to the container repository. You can use this option to extend or restrict permissions based on what the group is assigned. Prerequisites You have change container namespace permissions. Procedure Navigate to Execution Environments . Select your container repository. Click Edit at the top right of your window. Under Groups with access , select a group or groups to grant access to. Optional: Add or remove permissions for a specific group using the drop-down under that group name. Click Save . 4.4. Tagging container images Tag images to add an additional name to images stored in your automation hub container repository. If no tag is added to an image, automation hub defaults to latest for the name. Prerequisites You have change image tags permissions. Procedure Navigate to Execution Environments . Select your container repository. Click the Images tab. Click the options icon, then click Manage tags . Add a new tag in the text field and click Add . Optional: Remove current tags by clicking x on any of the tags for that image. Click Save . Verification Click the Activity tab and review the latest changes. 4.5. Creating a credential in automation controller Previously, you were required to deploy a registry to store execution environment images. On Ansible Automation Platform 2.0 and later, it is assumed that you already have a container registry up and running. Therefore, you are only required to add the credentials of a container registry of your choice to store execution environment images. To pull container images from a password or token-protected registry, create a credential in automation controller: Procedure Navigate to automation controller. In the side-menu bar, click Resources → Credentials . Click Add to create a new credential. Enter an authorization Name , Description , and Organization . Select the Credential Type . Enter the Authentication URL . This is the container registry address. Enter the Username and Password or Token required to log in to the container registry. Optionally, select Verify SSL to enable SSL verification. Click Save .
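The steps above are performed in the automation hub web UI. As an illustrative sketch only, assuming a private automation hub reachable at hub.example.com (a hypothetical hostname) and a locally built image named my-ee (also hypothetical), an image can be pushed to the hub's container registry with podman before you manage its tags in the UI:
podman login hub.example.com                         # authenticate with an account that can push to the repository
podman tag my-ee:latest hub.example.com/my-ee:1.0    # tag the local image with the hub registry path
podman push hub.example.com/my-ee:1.0                # the pushed image then appears under Execution Environments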
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_containers_in_private_automation_hub/setting-up-container-repository
|
Support
|
Support OpenShift Container Platform 4.16 Getting support for OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/support/index
|
Chapter 16. Kerberos PKINIT authentication in IdM
|
Chapter 16. Kerberos PKINIT authentication in IdM Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) is a preauthentication mechanism for Kerberos. The Identity Management (IdM) server includes a mechanism for Kerberos PKINIT authentication. 16.1. Default PKINIT configuration The default PKINIT configuration on your IdM servers depends on the certificate authority (CA) configuration. Table 16.1. Default PKINIT configuration in IdM CA configuration PKINIT configuration Without a CA, no external PKINIT certificate provided Local PKINIT: IdM only uses PKINIT for internal purposes on servers. Without a CA, external PKINIT certificate provided to IdM IdM configures PKINIT by using the external Kerberos key distribution center (KDC) certificate and CA certificate. With an Integrated CA IdM configures PKINIT by using the certificate signed by the IdM CA. 16.2. Displaying the current PKINIT configuration IdM provides multiple commands you can use to query the PKINIT configuration in your domain. Procedure To determine the PKINIT status in your domain, use the ipa pkinit-status command: The command displays the PKINIT configuration status as enabled or disabled : enabled : PKINIT is configured using a certificate signed by the integrated IdM CA or an external PKINIT certificate. disabled : IdM only uses PKINIT for internal purposes on IdM servers. To list the IdM servers with active Kerberos key distribution centers (KDCs) that support PKINIT for IdM clients, use the ipa config-show command on any server: 16.3. Configuring PKINIT in IdM If your IdM servers are running with PKINIT disabled, use these steps to enable it. For example, a server is running with PKINIT disabled if you passed the --no-pkinit option with the ipa-server-install or ipa-replica-install utilities. Prerequisites Ensure that all IdM servers with a certificate authority (CA) installed are running on the same domain level. Procedure Check if PKINIT is enabled on the server: If PKINIT is disabled, you will see the following output: You can also use the command to find all the servers where PKINIT is enabled if you omit the --server <server_fqdn> parameter. If you are using IdM without a CA: On the IdM server, install the CA certificate that signed the Kerberos key distribution center (KDC) certificate: To update all IPA hosts, repeat the ipa-certupdate command on all replicas and clients: Check if the CA certificate has already been added using the ipa-cacert-manage list command. For example: Use the ipa-server-certinstall utility to install an external KDC certificate. The KDC certificate must meet the following conditions: It is issued with the common name CN=fully_qualified_domain_name,certificate_subject_base . It includes the Kerberos principal krbtgt/REALM_NAME@REALM_NAME . It contains the Object Identifier (OID) for KDC authentication: 1.3.6.1.5.2.3.5. See your PKINIT status: If you are using IdM with a CA certificate, enable PKINIT as follows: If you are using an IdM CA, the command requests a PKINIT KDC certificate from the CA. Additional resources ipa-server-certinstall(1) man page on your system 16.4. Additional resources For details on Kerberos PKINIT, see PKINIT configuration in the MIT Kerberos Documentation.
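As a compact sketch of the status check and CA-backed enablement described above, using only commands that appear in the command list for this chapter (the server name is a placeholder):
kinit admin                                         # authenticate as an IdM administrator
ipa pkinit-status                                   # report the PKINIT status (enabled or disabled) for each server
ipa pkinit-status --server=server.idm.example.com   # limit the query to a single server
ipa-pkinit-manage enable                            # with an integrated CA, request a PKINIT KDC certificate and enable PKINIT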
|
[
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]",
"kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com 1 server matched ---------------- Server name: server.idm.example.com PKINIT status:enabled ---------------------------- Number of entries returned 1 ----------------------------",
"ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa-cacert-manage install -t CT,C,C ca.pem",
"ipa-certupdate",
"ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful",
"ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/kerberos-pkinit-authentication-in-idm_managing-users-groups-hosts
|
Chapter 1. Overview
|
Chapter 1. Overview Installer-provisioned installation on bare metal nodes deploys and configures the infrastructure that an OpenShift Container Platform cluster runs on. This guide provides a methodology to achieving a successful installer-provisioned bare-metal installation. The following diagram illustrates the installation environment in phase 1 of deployment: For the installation, the key elements in the diagram are: Provisioner : A physical machine that runs the installation program and hosts the bootstrap VM that deploys the control plane of a new OpenShift Container Platform cluster. Bootstrap VM : A virtual machine used in the process of deploying an OpenShift Container Platform cluster. Network bridges : The bootstrap VM connects to the bare metal network and to the provisioning network, if present, via network bridges, eno1 and eno2 . API VIP : An API virtual IP address (VIP) is used to provide failover of the API server across the control plane nodes. The API VIP first resides on the bootstrap VM. A script generates the keepalived.conf configuration file before launching the service. The VIP moves to one of the control plane nodes after the bootstrap process has completed and the bootstrap VM stops. In phase 2 of the deployment, the provisioner destroys the bootstrap VM automatically and moves the virtual IP addresses (VIPs) to the appropriate nodes. The keepalived.conf file sets the control plane machines with a lower Virtual Router Redundancy Protocol (VRRP) priority than the bootstrap VM, which ensures that the API on the control plane machines is fully functional before the API VIP moves from the bootstrap VM to the control plane. Once the API VIP moves to one of the control plane nodes, traffic sent from external clients to the API VIP routes to an haproxy load balancer running on that control plane node. This instance of haproxy load balances the API VIP traffic across the control plane nodes. The Ingress VIP moves to the worker nodes. The keepalived instance also manages the Ingress VIP. The following diagram illustrates phase 2 of deployment: After this point, the node used by the provisioner can be removed or repurposed. From here, all additional provisioning tasks are carried out by the control plane. Note For installer-provisioned infrastructure installations, CoreDNS exposes port 53 at the node level, making it accessible from other routable networks. Additional resources Using DNS forwarding Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media baseboard management controller (BMC) addressing option such as redfish-virtualmedia or idrac-virtualmedia .
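Purely as an illustration of the note about CoreDNS, and assuming a routable node address of 198.51.100.10 and a cluster API name of api.mycluster.example.com (both hypothetical), a standard DNS query tool can confirm that a node answers on port 53:
dig +short @198.51.100.10 api.mycluster.example.com   # an answer here shows that node-level CoreDNS is reachable on port 53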
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-overview
|
Chapter 15. Planning for Installation on IBM Z
|
Chapter 15. Planning for Installation on IBM Z 15.1. Pre-installation Red Hat Enterprise Linux 7 runs on zEnterprise 196 or later IBM mainframe systems. The installation process assumes that you are familiar with the IBM Z and can set up logical partitions (LPARs) and z/VM guest virtual machines. For additional information on IBM Z, see http://www.ibm.com/systems/z . For installation of Red Hat Enterprise Linux on IBM Z, Red Hat supports DASD (Direct Access Storage Device) and FCP (Fiber Channel Protocol) storage devices. Before you install Red Hat Enterprise Linux, you must decide on the following: Decide whether you want to run the operating system on an LPAR or as a z/VM guest operating system. Decide if you need swap space and if so, how much. Although it is possible (and recommended) to assign enough memory to a z/VM guest virtual machine and let z/VM do the necessary swapping, there are cases where the amount of required RAM is hard to predict. Such instances should be examined on a case-by-case basis. See Section 18.15.3.4, "Recommended Partitioning Scheme" . Decide on a network configuration. Red Hat Enterprise Linux 7 for IBM Z supports the following network devices: Real and virtual Open Systems Adapter (OSA) Real and virtual HiperSockets LAN channel station (LCS) for real OSA You require the following hardware: Disk space. Calculate how much disk space you need and allocate sufficient disk space on DASDs [2] or SCSI [3] disks. You require at least 10 GB for a server installation, and 20 GB if you want to install all packages. You also require disk space for any application data. After the installation, you can add or delete more DASD or SCSI disk partitions. The disk space used by the newly installed Red Hat Enterprise Linux system (the Linux instance) must be separate from the disk space used by other operating systems you have installed on your system. For more information about disks and partition configuration, see Section 18.15.3.4, "Recommended Partitioning Scheme" . RAM. Acquire 1 GB (recommended) for the Linux instance. With some tuning, an instance might run with as little as 512 MB RAM. Note When initializing swap space on an FBA ( Fixed Block Architecture ) DASD using the SWAPGEN utility, the FBAPART option must be used. [2] Direct Access Storage Devices (DASDs) are hard disks that allow a maximum of three partitions per device. For example, dasda can have partitions dasda1 , dasda2 , and dasda3 . [3] Using the SCSI-over-Fibre Channel device driver (the zfcp device driver) and a switch, SCSI LUNs can be presented to Linux on IBM Z as if they were locally attached SCSI drives.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-installation-planning-s390
|
Chapter 88. Header
|
Chapter 88. Header The Header Expression Language allows you to extract values of named headers. 88.1. Header Options The Header language supports 1 options, which are listed below. Name Default Java Type Description trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 88.2. Example usage The recipientList EIP can utilize a header: <route> <from uri="direct:a" /> <recipientList> <header>myHeader</header> </recipientList> </route> In this case, the list of recipients are contained in the header 'myHeader'. And the same example in Java DSL: from("direct:a").recipientList(header("myHeader")); 88.3. Dependencies The Header language is part of camel-core . 88.4. Spring Boot Auto-Configuration When using header with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. 
For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. 
String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 
10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. 
true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. 
Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. 
Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. 
In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. 
String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets a CamelContext id pattern to only allow Rest APIs from rest services within CamelContexts whose name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContexts with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
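The options above are standard Spring Boot configuration properties and can be set in application.properties or application.yml like any other property. A minimal, purely illustrative sketch follows; the property names are taken from the tables above, while the values are arbitrary examples rather than recommendations:
# Resilience4j circuit breaker tuning (example values)
camel.resilience4j.failure-rate-threshold=50
camel.resilience4j.sliding-window-size=100
camel.resilience4j.wait-duration-in-open-state=60
# Rest DSL transport and binding defaults (example values)
camel.rest.component=servlet
camel.rest.binding-mode=json
camel.rest.port=8080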
|
[
"<route> <from uri=\"direct:a\" /> <recipientList> <header>myHeader</header> </recipientList> </route>",
"from(\"direct:a\").recipientList(header(\"myHeader\"));",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-header-language-starter
|
Chapter 1. Introduction to Hammer
|
Chapter 1. Introduction to Hammer Hammer is a powerful command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Red Hat Satellite Server either through CLI commands or automation in shell scripts. Hammer also provides an interactive shell. Hammer compared to Satellite web UI Compared to navigating the web UI, using Hammer can result in much faster interaction with the Satellite Server, as common shell features such as environment variables and aliases are at your disposal. You can also incorporate Hammer commands into reusable scripts for automating tasks of various complexity. Output from Hammer commands can be redirected to other tools, which allows for integration with your existing environment. You can issue Hammer commands directly on the base operating system running Red Hat Satellite. Access to Satellite Server's base operating system is required to issue Hammer commands, which can limit the number of potential users compared to the web UI. Although the parity between Hammer and the web UI is almost complete, the web UI has development priority and can be ahead especially for newly introduced features. Hammer compared to Satellite API For many tasks, both Hammer and Satellite API are equally applicable. Hammer can be used as a human friendly interface to Satellite API, for example to test responses to API calls before applying them in a script (use the -d option to inspect API calls issued by Hammer, for example hammer -d organization list ). Changes in the API are automatically reflected in Hammer, while scripts using the API directly have to be updated manually. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, a script communicating directly with the API establishes the binding only once. See the API Guide for more information. 1.1. Getting Help View the full list of hammer options and subcommands by executing: Use --help to inspect any subcommand, for example: You can search the help output using grep , or redirect it to a text viewer, for example: 1.2. Authentication A Satellite user must prove their identity to Red Hat Satellite when entering hammer commands. Hammer commands can be run manually or automatically. In either case, hammer requires Satellite credentials for authentication. There are three methods of hammer authentication: Hammer authentication session Storing credentials in the hammer configuration file Providing credentials with each hammer command The hammer configuration file method is recommended when running commands automatically. For example, running Satellite maintenance commands from a cron job. When running commands manually, Red Hat recommends using the hammer authentication session and providing credentials with each command. 1.2.1. Hammer Authentication Session The hammer authentication session is a cache that stores your credentials, and you have to provide them only once, at the beginning of the session. This method is suited to running several hammer commands in succession, for example a script containing hammer commands. In this scenario, you enter your Satellite credentials once, and the script runs as expected. By using the hammer authentication session, you avoid storing your credentials in the script itself and in the ~/.hammer/cli.modules.d/foreman.yml hammer configuration file. 
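For illustration, a minimal session-based workflow using the commands covered in this section might look like the following; the organization listing is only an example of a command run inside the session:
hammer auth login          # prompted for Satellite credentials once
hammer organization list   # subsequent commands reuse the cached session
hammer auth status         # check whether the session is still valid
hammer auth logout         # end the session explicitly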
See the instructions on how to use the sessions: To enable sessions, add :use_sessions: true to the ~/.hammer/cli.modules.d/foreman.yml file: Note that if you enable sessions, credentials stored in the configuration file will be ignored. To start a session, enter the following command: You are prompted for your Satellite credentials, and logged in. You will not be prompted for the credentials again until your session expires. The default length of a session is 60 minutes. You can change the time to suit your preference. For example, to change it to 30 minutes, enter the following command: To see the current status of the session, enter the following command: To end the session, enter the following command: 1.2.2. Hammer Configuration File If you ran the Satellite installation with --foreman-initial-admin-username and --foreman-initial-admin-password options, credentials you entered are stored in the ~/.hammer/cli.modules.d/foreman.yml configuration file, and hammer does not prompt for your credentials. You can also add your credentials to the ~/.hammer/cli.modules.d/foreman.yml configuration file manually: Important Use only spaces for indentation in hammer configuration files. Do not use tabs for indentation in hammer configuration files. 1.2.3. Command Line If you do not have your Satellite credentials saved in the ~/.hammer/cli.modules.d/foreman.yml configuration file, hammer prompts you for them each time you enter a command. You can specify your credentials when executing a command as follows: Note Examples in this guide assume that you have saved credentials in the configuration file, or are using a hammer authentication session. 1.3. Using Standalone Hammer You can install hammer on a host running Red Hat Enterprise Linux 8 or Red Hat Enterprise Linux 7 that has no Satellite Server installed, and use it to connect the host to a remote Satellite. Prerequisites Ensure that you register the host to Satellite Server or Capsule Server. For more information, see Registering Hosts in Managing Hosts . Ensure that you synchronize the following repositories on Satellite Server or Capsule Server. For more information, see Synchronizing Repositories in Managing Content . On Red Hat Enterprise Linux 8: rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-utils-6.11-for-rhel-8-x86_64-rpms On Red Hat Enterprise Linux 7: rhel-7-server-rpms rhel-7-server-satellite-utils-6.11-rpms rhel-server-rhscl-7-rpms Procedure On a host, complete the following steps to install hammer : Enable the required repositories: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 7: If your host is running Red Hat Enterprise Linux 8, enable the Satellite Utils module: Install hammer : On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 7: Edit the :host: entry in the /etc/hammer/cli.modules.d/foreman.yml file to include the Satellite IP address or FQDN. 1.4. Setting a Default Organization and Location Many hammer commands are organization specific. You can set a default organization and location for hammer commands so that you do not have to specify them every time with the --organization and --location options. Specifying a default organization is useful when you mostly manage a single organization, as it makes your commands shorter. However, when you switch to a different organization, you must use hammer with the --organization option to specify it. 
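For example, to scope a single command to another organization explicitly (the organization name here is a placeholder):
hammer host list --organization "Other_Org"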
Procedure To set a default organization and location, complete the following steps: To set a default organization, enter the following command: You can find the name of your organization with the hammer organization list command. Optional: To set a default location, enter the following command: You can find the name of your location with the hammer location list command. To verify the currently specified default settings, enter the following command: 1.5. Configuring Hammer The default location for global hammer configuration is: /etc/hammer/cli_config.yml for general hammer settings /etc/hammer/cli.modules.d/ for CLI module configuration files You can set user specific directives for hammer (in ~/.hammer/cli_config.yml ) as well as for CLI modules (in respective .yml files under ~/.hammer/cli.modules.d/ ). To see the order in which configuration files are loaded, as well as versions of loaded modules, use: Note Loading configuration for many CLI modules can slow down the execution of hammer commands. In such a case, consider disabling CLI modules that are not regularly used. Apart from saving credentials as described in Section 1.2, "Authentication" , you can set several other options in the ~/.hammer/ configuration directory. For example, you can change the default log level and set log rotation with the following directives in ~/.hammer/cli_config.yml . These directives affect only the current user and are not applied globally. Similarly, you can configure user interface settings. For example, set the number of entries displayed per request in the Hammer output by changing the following line: This setting is an equivalent of the --per-page Hammer option. 1.6. Configuring Hammer Logging You can set hammer to log debugging information for various Satellite components. You can set debug or normal configuration options for all Satellite components. Note After changing hammer's logging behavior, you must restart Satellite services. To set debug level for all components, use the following command: To set production level logging, use the following command: To list the currently recognized components, that you can set logging for: To list all available logging options: 1.7. Invoking the Hammer Shell You can issue hammer commands through the interactive shell. To invoke the shell, issue the following command: In the shell, you can enter sub-commands directly without typing "hammer", which can be useful for testing commands before using them in a script. To exit the shell, type exit or press Ctrl + D . 1.8. Generating Formatted Output You can modify the default formatting of the output of hammer commands to simplify the processing of this output by other command line tools and applications. For example, to list organizations in a CSV format with a custom separator (in this case a semicolon), use the following command: Output in CSV format is useful for example when you need to parse IDs and use them in a for loop. Several other formatting options are available with the --output option: Replace output_format with one of: table - generates output in the form of a human readable table (default). base - generates output in the form of key-value pairs. yaml - generates output in the YAML format. csv - generates output in the Comma Separated Values format. To define a custom separator, use the --csv and --csv-separator options instead. json - generates output in the JavaScript Object Notation format. silent - suppresses the output. 1.9. 
Hiding Header Output from Hammer Commands When you use any hammer command, you have the option of hiding headers from the output. If you want to pipe or use the output in custom scripts, hiding the headers is useful. To hide the header output, add the --no-headers option to any hammer command. 1.10. Using JSON for Complex Parameters JSON is the preferred way to describe complex parameters. An example of JSON formatted content appears below: 1.11. Troubleshooting with Hammer You can use the hammer ping command to check the status of core Satellite services. Together with the satellite-maintain service status command, this can help you to diagnose and troubleshoot Satellite issues. If all services are running as expected, the output looks as follows:
|
[
"USD hammer --help",
"USD hammer organization --help",
"USD hammer | less",
":foreman: :use_sessions: true",
"hammer auth login",
"hammer settings set --name idle_timeout --value 30 Setting [idle_timeout] updated to [30]",
"hammer auth status",
"hammer auth logout",
":foreman: :username: ' username ' :password: ' password '",
"USD hammer -u username -p password subcommands",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-utils-6.11-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-satellite-utils-6.11-rpms --enable=rhel-server-rhscl-7-rpms",
"dnf module enable satellite-utils:el8",
"dnf install rubygem-hammer_cli_katello",
"yum install tfm-rubygem-hammer_cli_katello",
":host: 'https:// satellite.example.com '",
"hammer defaults add --param-name organization --param-value \"Your_Organization\"",
"hammer defaults add --param-name location --param-value \"Your_Location\"",
"hammer defaults list",
"hammer -d --version",
":log_level: 'warning' :log_size: 5 #in MB",
":per_page: 30",
"satellite-maintain service restart",
"hammer admin logging --all --level-debug satellite-maintain service restart",
"hammer admin logging --all --level-production satellite-maintain service restart",
"hammer admin logging --list",
"hammer admin logging --help Usage: hammer admin logging [OPTIONS]",
"hammer shell",
"hammer --csv --csv-separator \";\" organization list",
"hammer --output output_format organization list",
"hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes= '{ \"cpus\": 2, \"corespersocket\": 2, \"memory_mb\": 4096, \"firmware\": \"efi\", \"resource_pool\": \"Resources\", \"cluster\": \"Example_Cluster\", \"guest_id\": \"rhel8\", \"path\": \"/Datacenters/EXAMPLE/vm/\", \"hardware_version\": \"Default\", \"memoryHotAddEnabled\": 0, \"cpuHotAddEnabled\": 0, \"add_cdrom\": 0, \"boot_order\": [ \"disk\", \"network\" ], \"scsi_controllers\":[ { \"type\": \"ParaVirtualSCSIController\", \"key\":1000 }, { \"type\": \"ParaVirtualSCSIController\", \"key\":1001 }it ] }'",
"hammer ping candlepin: Status: ok Server Response: Duration: 22ms candlepin_auth: Status: ok Server Response: Duration: 17ms pulp: Status: ok Server Response: Duration: 41ms pulp_auth: Status: ok Server Response: Duration: 23ms foreman_tasks: Status: ok Server Response: Duration: 33ms"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cli_guide/chap-CLI_Guide-Introduction_to_Hammer
|
Chapter 234. MVEL Component
|
Chapter 234. MVEL Component Available as of Camel version 2.12 The mvel: component allows you to process a message using an MVEL template. This can be ideal when using Templating to generate responses for requests. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mvel</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 234.1. URI format mvel:templateName[?options] Where templateName is the classpath-local URI of the template to invoke; or the complete URL of the remote template (eg: file://folder/myfile.mvel ). You can append query options to the URI in the following format, ?option=value&option=value&... 234.2. Options The MVEL component supports 2 options, which are listed below. Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentility and integrity of your end application, so use this option with caution. false boolean The MVEL endpoint is configured using URI syntax: with the following path and query parameters: 234.2.1. Path Parameters (1 parameters): Name Description Default Type resourceUri Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 234.2.2. Query Parameters (5 parameters): Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentility and integrity of your end application, so use this option with caution. false boolean contentCache (producer) Sets whether to use resource content cache or not false boolean encoding (producer) Character encoding of the resource content. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 234.3. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. 
Name Description Default Type camel.component.mvel.enabled Enable mvel component true Boolean camel.component.mvel.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.language.mvel.enabled Enable mvel language true Boolean camel.language.mvel.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks true Boolean 234.4. Message Headers The mvel component sets a couple of headers on the message. Header Description CamelMvelResourceUri The templateName as a String object. 234.5. MVEL Context Camel will provide exchange information in the MVEL context (just a Map ). The Exchange is transferred as: key value exchange The Exchange itself. exchange.properties The Exchange properties. headers The headers of the In message. camelContext The Camel Context instance. request The In message. in The In message. body The In message body. out The Out message (only for InOut message exchange pattern). response The Out message (only for InOut message exchange pattern). 234.6. Hot reloading The mvel template resource is, by default, hot reloadable for both file and classpath resources (expanded jar). If you set contentCache=true , Camel will only load the resource once, and thus hot reloading is not possible. This scenario can be used in production, when the resource never changes. 234.7. Dynamic templates Camel provides two headers by which you can define a different resource location for a template or the template content itself. If any of these headers is set then Camel uses this over the endpoint configured resource. This allows you to provide a dynamic template at runtime. Header Type Description CamelMvelResourceUri String A URI for the template resource to use instead of the endpoint configured. CamelMvelTemplate String The template to use instead of the endpoint configured. 234.8. Samples For example you could use something like from("activemq:My.Queue"). to("mvel:com/acme/MyResponse.mvel"); To use an MVEL template to formulate a response to a message for InOut message exchanges (where there is a JMSReplyTo header). To specify dynamically via a header which template the component should use, for example: from("direct:in"). setHeader("CamelMvelResourceUri").constant("path/to/my/template.mvel"). to("mvel:dummy?allowTemplateFromHeader=true"); To specify the template content itself directly via a header, for example: from("direct:in"). setHeader("CamelMvelTemplate").constant("@{\"The result is \" + request.body * 3}"). to("mvel:dummy?allowTemplateFromHeader=true"); Warning Enabling the allowTemplateFromHeader option has security ramifications. For example, if the header contains untrusted or user derived content, this can ultimately impact on the confidentiality and integrity of your end application, so use this option with caution. 234.9. See Also Configuring Camel Component Endpoint Getting Started
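To make the MVEL Context section above more concrete, a template such as com/acme/MyResponse.mvel could look roughly like the following. This is only a sketch: the recipient header and the plain-text body used here are invented for the example, and the @{ } orb-tag syntax is the one shown in the samples above.
Hello @{headers.recipient},
we received the following request body: @{body}
Any of the variables listed in the MVEL Context table (exchange, headers, body, and so on) can be referenced inside the orb tags in the same way.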
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-mvel</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"mvel:templateName[?options]",
"mvel:resourceUri",
"from(\"activemq:My.Queue\"). to(\"mvel:com/acme/MyResponse.mvel\");",
"from(\"direct:in\"). setHeader(\"CamelMvelResourceUri\").constant(\"path/to/my/template.mvel\"). to(\"mvel:dummy?allowTemplateFromHeader=true\");",
"from(\"direct:in\"). setHeader(\"CamelMvelTemplate\").constant(\"@{\\\"The result is \\\" + request.body * 3}\\\" }\"). to(\"velocity:dummy?allowTemplateFromHeader=true\");"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/mvel-component
|
2.8.2.5. Saving the Settings
|
2.8.2.5. Saving the Settings Click OK to save the changes and enable or disable the firewall. If Enable firewall was selected, the options selected are translated to iptables commands and written to the /etc/sysconfig/iptables file. The iptables service is also started so that the firewall is activated immediately after saving the selected options. If Disable firewall was selected, the /etc/sysconfig/iptables file is removed and the iptables service is stopped immediately. The selected options are also written to the /etc/sysconfig/system-config-firewall file so that the settings can be restored the next time the application is started. Do not edit this file manually. Even though the firewall is activated immediately, the iptables service is not configured to start automatically at boot time. Refer to Section 2.8.2.6, "Activating the IPTables Service" for more information.
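As a quick sketch of what this looks like on a Red Hat Enterprise Linux 6 system (Section 2.8.2.6 describes the supported procedure; the commands below are only illustrative):
cat /etc/sysconfig/iptables   # review the rules generated by the tool
service iptables status       # confirm the firewall is currently active
chkconfig iptables on         # make the iptables service start at boot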
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-basic_firewall_configuration-saving_the_settings
|
5.3.4. Creating the New Logical Volume
|
5.3.4. Creating the New Logical Volume After creating the new volume group, you can create the new logical volume yourlv .
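Once the logical volume exists, you can verify it with the standard LVM reporting commands, for example (the volume and group names match the example in this section; the output varies by system):
lvs yourvg
lvdisplay /dev/yourvg/yourlv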
|
[
"lvcreate -L5G -n yourlv yourvg Logical volume \"yourlv\" created"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vol_create_ex3
|
Chapter 8. Known issues
|
Chapter 8. Known issues This part describes known issues in Red Hat Enterprise Linux 9.0. 8.1. Installer and image creation The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) Local Media installation source is not detected when booting the installation from a USB that is created using a third party tool When booting the RHEL installation from a USB that is created using a third party tool, the installer fails to detect the Local Media installation source (only Red Hat CDN is detected). This issue occurs because the default boot option inst.stage2= attempts to search for iso9660 image format. However, a third party tool might create an ISO image with a different format. As a workaround, use one of the following solutions: When booting the installation, press the Tab key to edit the kernel command line, and change the boot option inst.stage2= to inst.repo= . To create a bootable USB device on Windows, use Fedora Media Writer. When using a third party tool like Rufus to create a bootable USB device, first regenerate the RHEL ISO image on a Linux system, and then use the third party tool to create a bootable USB device. For more information on the steps involved in performing any of the specified workarounds, see Installation media is not auto detected during the installation of RHEL 8.3 . (BZ#1877697) The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system; execute it in a temporary virtual machine instead, so that the SELinux policy on the production system is not modified. Running anaconda as part of the system installation process such as installing from boot.iso or dvd.iso is not affected by this issue. ( BZ#2050140 ) The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail.
( BZ#1914955 ) Minimal RHEL installation no longer includes the s390utils-base package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. Consequently, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. To work around this problem, manually install the s390utils-base package after completing the RHEL installation or explicitly install s390utils-base using a kickstart file. (BZ#1932480) Hard drive partitioned installations with iso9660 filesystem fail You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To work around this problem, add the following script in the kickstart file to format the disk before the installation starts. Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all the existing data from the disk. As a result, installations work as expected without any errors. ( BZ#1929105 ) Anaconda fails to verify existence of an administrator user account While installing RHEL using a graphical user interface, Anaconda fails to verify if the administrator account has been created. As a consequence, users might install a system without any administrator user account. To work around this problem, ensure that you configure an administrator user account, or that the root password is set and the root account is unlocked. As a result, users can perform administrative tasks on the installed system. ( BZ#2047713 ) Anaconda fails to log in to the iSCSI server using the no authentication method after an unsuccessful CHAP authentication attempt When you add iSCSI discs using CHAP authentication and the login attempt fails due to incorrect credentials, a relogin attempt to the discs with the no authentication method fails. To work around this problem, close the current session and log in using the no authentication method. (BZ#1983602) New XFS features prevent booting of PowerNV IBM POWER systems with firmware older than version 5.10 PowerNV IBM POWER systems use a Linux kernel for firmware, and use Petitboot as a replacement for GRUB.
This results in the firmware kernel mounting /boot and Petitboot reading the GRUB config and booting RHEL. The RHEL 9 kernel introduces bigtime=1 and inobtcount=1 features to the XFS filesystem, which firmware with kernel older than version 5.10 do not understand. As a consequence, Anaconda prevents the installation with the following error message: Your firmware doesn't support XFS file system features on the /boot file system. The system will not be bootable. Please, upgrade the firmware or change the file system type. As a workaround, use another filesystem for /boot , for example ext4 . (BZ#2008792) RHEL installer does not process the inst.proxy boot option correctly When running Anaconda, the installation program does not process the inst.proxy boot option correctly. As a consequence, you cannot use the specified proxy to fetch the installation image. To work around this issue: * Use the latest version of RHEL distribution. * Use proxy instead of inst.proxy boot option. (JIRA:RHELDOCS-18764) RHEL installation fails on IBM Z architectures with multi-LUNs RHEL installation fails on IBM Z architectures when using multiple LUNs during installation. Due to the multipath setup of FCP and the LUN auto-scan behavior, the length of the kernel command line in the configuration file exceeds 896 bytes. To work around this problem, you can do one of the following: Install the latest version of RHEL (RHEL 9.2 or later). Install the RHEL system with a single LUN and add additional LUNs post installation. Optimize the redundant zfcp entries in the boot configuration on the installed system. Create a physical volume ( pvcreate ) for each of the additional LUNs listed under /dev/mapper/ . Extend the VG with PVs, for example, vgextend <vg_name> /dev/mapper/mpathX . Increase the LV as needed for example, lvextend -r -l +100%FREE /dev/<vg name>/root . For more information, see the KCS solution . (JIRA:RHELDOCS-18638) RHEL installer does not automatically discover or use iSCSI devices as boot devices on aarch64 The absence of the iscsi_ibft kernel module in RHEL installers running on aarch64 prevents automatic discovery of iSCSI devices defined in firmware. These devices are not automatically visible in the installer nor selectable as boot devices when added manually by using the GUI. As a workaround, add the "inst.nonibftiscsiboot" parameter to the kernel command line when booting the installer and then manually attach iSCSI devices through the GUI. As a result, the installer can recognize the attached iSCSI devices as bootable and installation completes as expected. For more information, see KCS solution . (JIRA:RHEL-56135) Kickstart installation fails with an unknown disk error when 'ignoredisk' command precedes 'iscsi' command Installing RHEL by using the kickstart method fails if the ignoredisk command is placed before the iscsi command. This issue occurs because the iscsi command attaches the specified iSCSI device during command parsing, while the ignoredisk command resolves device specifications simultaneously. If the ignoredisk command references an iSCSI device name before it is attached by the iscsi command, the installation fails with an "unknown disk" error. As a workaround, ensure that the iscsi command is placed before the ignoredisk command in the Kickstart file to reference the iSCSI disk and enable successful installation. 
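A rough sketch of the required ordering in a Kickstart file follows; the IP address, target IQN, and disk name are placeholders invented for this example, not values taken from this document:
# Attach the iSCSI device first ...
iscsi --ipaddr=192.0.2.10 --target=iqn.2022-01.com.example:target0
# ... and only then restrict the installation to it
ignoredisk --only-use=sda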
(JIRA:RHEL-13837) The services Kickstart command fails to disable the firewalld service A bug in Anaconda prevents the services --disabled=firewalld command from disabling the firewalld service in Kickstart. To work around this problem, use the firewall --disabled command instead. As a result, the firewalld service is disabled properly. (JIRA:RHEL-82566) 8.2. Subscription management virt-who cannot connect to ESX servers when in FIPS mode When using the virt-who utility on a RHEL 9 system in FIPS mode, virt-who cannot connect to ESX servers. As a consequence, virt-who does not report any ESX servers, even if configured for them, and logs the following error message: To work around this issue, do one of the following: Do not set the RHEL 9 system you use for running virt-who to FIPS mode. Do not upgrade the RHEL system you use for running virt-who to version 9.0. ( BZ#2054504 ) 8.3. Software management The Installation process sometimes becomes unresponsive When you install RHEL, the installation process sometimes becomes unresponsive. The /tmp/packaging.log file displays the following message at the end: To workaround this problem, restart the installation process. ( BZ#2073510 ) 8.4. Shells and command-line tools ReaR fails during recovery if the TMPDIR variable is set in the configuration file Setting and exporting TMPDIR in the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file does not work and is deprecated. The ReaR default configuration file /usr/share/rear/conf/default.conf contains the following instructions: The instructions mentioned above do not work correctly because the TMPDIR variable has the same value in the rescue environment, which is not correct if the directory specified in the TMPDIR variable does not exist in the rescue image. As a consequence, setting and exporting TMPDIR in the /etc/rear/local.conf file leads to the following error when the rescue image is booted : or the following error and abort later, when running rear recover : To work around this problem, if you want to have a custom temporary directory, specify a custom directory for ReaR temporary files by exporting the variable in the shell environment before executing ReaR. For example, execute the export TMPDIR=... statement and then execute the rear command in the same shell session or script. As a result, the recovery is successful in the described configuration. Jira:RHEL-24847 Renaming network interfaces using ifcfg files fails On RHEL 9, the initscripts package is not installed by default. Consequently, renaming network interfaces using ifcfg files fails. To solve this problem, Red Hat recommends that you use udev rules or link files to rename interfaces. For further details, see Consistent network interface device naming and the systemd.link(5) man page. If you cannot use one of the recommended solutions, install the initscripts package. (BZ#2018112) The chkconfig package is not installed by default in RHEL 9 The chkconfig package, which updates and queries runlevel information for system services, is not installed by default in RHEL 9. To manage services, use the systemctl commands or install the chkconfig package manually. For more information about systemd , see Managing systemd . For instructions on how to use the systemctl utility, see Managing system services with systemctl . (BZ#2053598) 8.5. 
Infrastructure services Both bind and unbound disable validation of SHA-1-based signatures The bind and unbound components disable validation support of all RSA/SHA1 (algorithm number 5) and RSASHA1-NSEC3-SHA1 (algorithm number 7) signatures, and the SHA-1 usage for signatures is restricted in the DEFAULT system-wide cryptographic policy. As a result, certain DNSSEC records signed with the SHA-1, RSA/SHA1, and RSASHA1-NSEC3-SHA1 digest algorithms fail to verify in Red Hat Enterprise Linux 9 and the affected domain names become vulnerable. To work around this problem, upgrade to a different signature algorithm, such as RSA/SHA-256 or elliptic curve keys. For more information and a list of top-level domains that are affected and vulnerable, see the DNSSEC records signed with RSASHA1 fail to verify solution. ( BZ#2070495 ) named fails to start if the same writable zone file is used in multiple zones BIND does not allow the same writable zone file in multiple zones. Consequently, if a configuration includes multiple zones which share a path to a file that can be modified by the named service, named fails to start. To work around this problem, use the in-view clause to share one zone between multiple views and make sure to use different paths for different zones. For example, include the view names in the path. Note that writable zone files are typically used in zones with allowed dynamic updates, slave zones, or zones maintained by DNSSEC. ( BZ#1984982 ) Setting the console keymap requires the libxkbcommon library on your minimal install In RHEL 9, certain systemd library dependencies have been converted from dynamic linking to dynamic loading, so that your system opens and uses the libraries at runtime when they are available. With this change, a functionality that depends on such libraries is not available unless you install the necessary library. This also affects setting the keyboard layout on systems with a minimal install. As a result, the localectl --no-convert set-x11-keymap gb command fails. To work around this problem, install the libxkbcommon library: ( BZ#2214130 ) 8.6. Security OpenSSL does not detect if a PKCS #11 token supports the creation of raw RSA or RSA-PSS signatures The TLS 1.3 protocol requires support for RSA-PSS signatures. If a PKCS #11 token does not support raw RSA or RSA-PSS signatures, server applications that use the OpenSSL library fail to work with an RSA key if the key is held by the PKCS #11 token. As a result, TLS communication fails in the described scenario. To work around this problem, configure servers and clients to use TLS version 1.2 as the highest TLS protocol version available. (BZ#1681178) OpenSSL incorrectly handles PKCS #11 tokens that does not support raw RSA or RSA-PSS signatures The OpenSSL library does not detect key-related capabilities of PKCS #11 tokens. Consequently, establishing a TLS connection fails when a signature is created with a token that does not support raw RSA or RSA-PSS signatures. To work around the problem, add the following lines after the .include line at the end of the crypto_policy section in the /etc/pki/tls/openssl.cnf file: As a result, a TLS connection can be established in the described scenario. (BZ#1685470) Cryptography not approved by FIPS works in OpenSSL in FIPS mode Cryptography that is not FIPS-approved works in the OpenSSL toolkit regardless of system settings. 
Consequently, you can use cryptographic algorithms and ciphers that should be disabled when the system is running in FIPS mode, for example: TLS cipher suites using the RSA key exchange work. RSA-based algorithms for public-key encryption and decryption work despite using the PKCS #1 and SSLv23 paddings or using keys shorter than 2048 bits. ( BZ#2053289 ) OpenSSL cannot use engines in FIPS mode Engine API is deprecated in OpenSSL 3.0 and is incompatible with OpenSSL Federal Information Processing Standards (FIPS) implementation and other FIPS-compatible implementations. Therefore, OpenSSL cannot run engines in FIPS mode. There is no workaround for this problem. ( BZ#2087253 ) PSK ciphersuites do not work with the FUTURE crypto policy Pre-shared key (PSK) ciphersuites are not recognized as performing perfect forward secrecy (PFS) key exchange methods. As a consequence, the ECDHE-PSK and DHE-PSK ciphersuites do not work with OpenSSL configured to SECLEVEL=3 , for example with the FUTURE crypto policy. As a workaround, you can set a less restrictive crypto policy or set a lower security level ( SECLEVEL ) for applications that use PSK ciphersuites. ( BZ#2060044 ) GnuPG incorrectly allows using SHA-1 signatures even if disallowed by crypto-policies The GNU Privacy Guard (GnuPG) cryptographic software can create and verify signatures that use the SHA-1 algorithm regardless of the settings defined by the system-wide cryptographic policies. Consequently, you can use SHA-1 for cryptographic purposes in the DEFAULT cryptographic policy, which is not consistent with the system-wide deprecation of this insecure algorithm for signatures. To work around this problem, do not use GnuPG options that involve SHA-1. As a result, you will prevent GnuPG from lowering the default system security by using the non-secure SHA-1 signatures. ( BZ#2070722 ) Some OpenSSH operations do not used FIPS-approved interfaces The OpenSSL cryptographic library, which is used by OpenSSH, provides two interfaces: legacy and modern. Because of changes to OpenSSL internals, only the modern interfaces use FIPS-certified implementations of cryptographic algorithms. Because OpenSSH uses legacy interfaces for some operations, it does not comply with FIPS requirements. ( BZ#2087121 ) gpg-agent does not work as an SSH agent in FIPS mode The gpg-agent tool creates MD5 fingerprints when adding keys to the ssh-agent program even though FIPS mode disables the MD5 digest. Consequently, the ssh-add utility fails to add the keys to the authentication agent. To work around the problem, create the ~/.gnupg/sshcontrol file without using the gpg-agent --daemon --enable-ssh-support command. For example, you can paste the output of the gpg --list-keys command in the <FINGERPRINT> 0 format to ~/.gnupg/sshcontrol . As a result, gpg-agent works as an SSH authentication agent. ( BZ#2073567 ) SELinux staff_u users can incorrectly switch to unconfined_r When the secure_mode boolean is enabled, staff_u users can incorrectly switch to the unconfined_r role. As a consequence, staff_u users can perform privileged operations affecting the security of the system. ( BZ#2021529 ) OpenSSH in RHEL 9.0-9.3 is not compatible with OpenSSL 3.2.2 The openssh packages provided by RHEL 9.0, 9.1, 9.2, and 9.3 strictly check for the OpenSSL version. 
Consequently, if you upgrade the openssl packages to version 3.2.2 and higher and you keep the openssh packages in version 8.7p1-34.el9_3.3 or earlier, the sshd service fails to start with an OpenSSL version mismatch error message. To work around this problem, upgrade the openssh packages to version 8.7p1-38.el9 and later. See the sshd not working, OpenSSL version mismatch solution (Red Hat Knowledgebase) for more information. (JIRA:RHELDOCS-19626) Default SELinux policy allows unconfined executables to make their stack executable The default state of the selinuxuser_execstack boolean in the SELinux policy is on, which means that unconfined executables can make their stack executable. Executables should not use this option, and it might indicate poorly coded executables or a possible attack. However, due to compatibility with other tools, packages, and third-party products, Red Hat cannot change the value of the boolean in the default policy. If your scenario does not depend on such compatibility aspects, you can turn the boolean off in your local policy by entering the command setsebool -P selinuxuser_execstack off . ( BZ#2064274 ) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. ( BZ#1834716 ) SSH timeout rules in STIG profiles configure incorrect options An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles: DISA STIG for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig ) DISA STIG with GUI for RHEL 9 ( xccdf_org.ssgproject.content_profile_stig_gui ) In each of these profiles, the following two rules are affected: When applied to SSH servers, each of these rules configures an option ( ClientAliveCountMax and ClientAliveInterval ) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 9 and DISA STIG with GUI for RHEL 9 profiles until a solution is developed. ( BZ#2038978 ) fagenrules --load does not work correctly The fapolicyd service does not correctly handle the signal hang up (SIGHUP). Consequently, fapolicyd terminates after receiving the SIGHUP signal. Therefore, the fagenrules --load command does not work properly, and rule updates require manual restarts of fapolicyd . To work around this problem, restart the fapolicyd service after any change in rules, and as a result fagenrules --load will work correctly. ( BZ#2070655 ) Ansible remediations require additional collections With the replacement of Ansible Engine by the ansible-core package, the list of Ansible modules provided with the RHEL subscription is reduced. As a consequence, running remediations that use Ansible content included within the scap-security-guide package requires collections from the rhc-worker-playbook package. 
For an Ansible remediation, perform the following steps: Install the required packages: Navigate to the /usr/share/scap-security-guide/ansible directory: # cd /usr/share/scap-security-guide/ansible Run the relevant Ansible playbook using environment variables that define the path to the additional Ansible collections: # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml Replace cis_server_l1 with the ID of the profile against which you want to remediate the system. As a result, the Ansible content is processed correctly. Note Support of the collections provided in rhc-worker-playbook is limited to enabling the Ansible content sourced in scap-security-guide . ( BZ#2105162 ) 8.7. Networking The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. Disable nm-cloud-setup to manually configure interfaces. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To prevent nm-cloud-setup from removing secondary IP addresses: Stop and disable the nm-cloud-setup service and timer: Display the available connection profiles: Reactivate the affected connection profiles: As a result, the service no longer removes manually-configured secondary IP addresses from interfaces. ( BZ#2151040 ) An empty rd.znet option in the kernel command line causes the network configuration to fail An rd.znet option without any arguments, such as net types or subchannels, in the kernel command line fails to configure networking. To work around this problem, either remove the rd.znet option from the command line completely or specify relevant net types, subchannels, and other relevant options. For more information about these options, see the dracut.cmdline(7) man page. (BZ#1931284) Failure to update the session key causes the connection to break The Kernel Transport Layer Security (kTLS) protocol does not support updating the session key, which is used by the symmetric cipher. Consequently, the user cannot update the key, which causes a connection break. To work around this problem, disable kTLS. As a result, with the workaround, it is possible to successfully update the session key. (BZ#2013650) The initscripts package is not installed by default By default, the initscripts package is not installed. As a consequence, the ifup and ifdown utilities are not available. As an alternative, use the nmcli connection up and nmcli connection down commands to enable and disable connections. If the suggested alternative does not work for you, report the problem and install the NetworkManager-initscripts-updown package, which provides a NetworkManager solution for the ifup and ifdown utilities. ( BZ#2082303 ) The primary IP address of an instance changes after starting the nm-cloud-setup service in Alibaba Cloud After launching an instance in the Alibaba Cloud, the nm-cloud-setup service assigns the primary IP address to an instance. However, if you assign multiple secondary IP addresses to an instance and start the nm-cloud-setup service, the former primary IP address gets replaced by one of the already assigned secondary IP addresses. The returned list of metadata verifies the same.
To work around the problem, configure secondary IP addresses manually to prevent the primary IP address from changing. As a result, an instance retains both IP addresses and the primary IP address does not change. ( BZ#2079849 ) 8.8. Kernel kdump fails to start on RHEL 9 kernel The RHEL 9 kernel does not have the crashkernel=auto parameter configured by default. Consequently, the kdump service fails to start by default. To work around this problem, configure the crashkernel= option to the required value. For example, to reserve 256 MB of memory using the grubby utility, enter the following command: As a result, the RHEL 9 kernel starts kdump and uses the configured memory size value to dump the vmcore file. (BZ#1894783) The kdump mechanism fails to capture vmcore on LUKS-encrypted targets When running kdump on systems with Linux Unified Key Setup (LUKS) encrypted partitions, systems require a certain amount of available memory. When the available memory is less than the required amount of memory, the systemd-cryptsetup service fails to mount the partition. Consequently, the second kernel fails to capture the crash dump file ( vmcore ) on LUKS-encrypted targets. With the kdumpctl estimate command, you can query the Recommended crashkernel value , which is the recommended memory size required for kdump . To work around this issue, use the following steps to configure the required memory for kdump on LUKS-encrypted targets: Print the estimated crashkernel value: Configure the amount of required memory by increasing the crashkernel value: Reboot the system for changes to take effect. As a result, kdump works correctly on systems with LUKS-encrypted partitions. (BZ#2017401) Allocating crash kernel memory fails at boot time On certain Ampere Altra systems, allocating the crash kernel memory for kdump usage fails during boot when the available memory is below 1 GB. Consequently, the kdumpctl command fails to start the kdump service as the required memory is more than the available memory size. As a workaround, decrease the value of the crashkernel parameter by a minimum of 240 MB to fit the size requirement, for example crashkernel=240M . As a result, the crash kernel memory allocation for kdump does not fail on Ampere Altra systems. ( BZ#2065013 ) kTLS does not support offloading of TLS 1.3 to NICs Kernel Transport Layer Security (kTLS) does not support offloading of TLS 1.3 to NICs. Consequently, software encryption is used with TLS 1.3 even when the NICs support TLS offload. To work around this problem, disable TLS 1.3 if offload is required. As a result, you can offload only TLS 1.2. When TLS 1.3 is in use, there is lower performance, since TLS 1.3 cannot be offloaded. (BZ#2000616) FADump enabled with Secure Boot might lead to GRUB Out of Memory (OOM) In the Secure Boot environment, GRUB and PowerVM together allocate a 512 MB memory region, known as the Real Mode Area (RMA), for boot memory. The region is divided among the boot components and, if any component exceeds its allocation, out-of-memory failures occur. Generally, the default installed initramfs file system and the vmlinux symbol table are within the limits to avoid such failures. However, if Firmware Assisted Dump (FADump) is enabled in the system, the default initramfs size can increase and exceed 95 MB. As a consequence, every system reboot leads to a GRUB OOM state. To avoid this issue, do not use Secure Boot and FADump together.
For more information and methods on how to work around this issue, see https://www.ibm.com/support/pages/node/6846531. (BZ#2149172) Systems in Secure Boot cannot run dynamic LPAR operations Users cannot run dynamic logical partition (DLPAR) operations from the Hardware Management Console (HMC) if either of these conditions is met: The Secure Boot feature is enabled, which implicitly enables the kernel lockdown mechanism in integrity mode. The kernel lockdown mechanism is manually enabled in integrity or confidentiality mode. In RHEL 9, kernel lockdown completely blocks Run Time Abstraction Services (RTAS) access to system memory accessible through the /dev/mem character device file. Several RTAS calls require write access to /dev/mem to function properly. Consequently, RTAS calls do not execute correctly and users see the following error message: (BZ#2083106) dkms provides an incorrect warning on program failure with correctly compiled drivers on 64-bit ARM CPUs The Dynamic Kernel Module Support ( dkms ) utility does not recognize that the kernel headers for 64-bit ARM CPUs work for both the kernels with 4 kilobyte and 64 kilobyte page sizes. As a result, when the kernel update is performed and the kernel-64k-devel package is not installed, dkms provides an incorrect warning on why the program failed on correctly compiled drivers. To work around this problem, install the kernel-headers package, which contains header files for both types of ARM CPU architectures and is not specific to dkms and its requirements. (JIRA:RHEL-25967) 8.9. Boot loader New kernels lose command-line options The GRUB boot loader does not apply custom, previously configured kernel command-line options to new kernels. Consequently, when you upgrade the kernel package, the system behavior might change after reboot due to the missing options. To work around the problem, manually add all custom kernel command-line options after each kernel upgrade. As a result, the kernel applies custom options as expected, until the next kernel upgrade. ( BZ#1969362 ) 8.10. File systems and storage Device Mapper Multipath is not supported with NVMe/TCP Using Device Mapper Multipath with the nvme-tcp driver can result in Call Trace warnings and system instability. To work around this problem, NVMe/TCP users must enable native NVMe multipathing and not use the device-mapper-multipath tools with NVMe. By default, native NVMe multipathing is enabled in RHEL 9. For more information, see Enabling multipathing on NVMe devices . (BZ#2033080) The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. (BZ#2011699) Invalid sysfs value for supported_speeds The qla2xxx driver reports 20Gb/s instead of the expected 64Gb/s as one of the supported port speeds in the sysfs supported_speeds attribute: As a consequence, if the HBA supports 64Gb/s link speed, the sysfs supported_speeds value is incorrect. This affects only the supported_speeds value of sysfs and the port operates at the expected negotiated link rate.
(BZ#2069758) Unable to connect to NVMe namespaces from Broadcom initiator on AMD EPYC systems By default, the RHEL kernel enables the IOMMU on AMD-based platforms. Consequently, when you use IOMMU-enabled platforms on servers with AMD processors, you might experience NVMe I/O problems, such as I/Os failing due to transfer length mismatches. To work around this problem, put the IOMMU into passthrough mode by using the kernel command-line option iommu=pt . As a result, you can now connect to NVMe namespaces from a Broadcom initiator on AMD EPYC systems. (BZ#2073541) 8.11. Dynamic programming languages, web and database servers The --ssl-fips-mode option in MySQL and MariaDB does not change FIPS mode The --ssl-fips-mode option in MySQL and MariaDB in RHEL works differently than in upstream. In RHEL 9, if you use --ssl-fips-mode as an argument for the mysqld or mariadbd daemon, or if you use ssl-fips-mode in the MySQL or MariaDB server configuration files, --ssl-fips-mode does not change FIPS mode for these database servers. Instead: If you set --ssl-fips-mode to ON , the mysqld or mariadbd server daemon does not start. If you set --ssl-fips-mode to OFF on a FIPS-enabled system, the mysqld or mariadbd server daemons still run in FIPS mode. This is expected because FIPS mode should be enabled or disabled for the whole RHEL system, not for specific components. Therefore, do not use the --ssl-fips-mode option in MySQL or MariaDB in RHEL. Instead, ensure FIPS mode is enabled on the whole RHEL system: Preferably, install RHEL with FIPS mode enabled. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. For information about installing RHEL in FIPS mode, see Installing the system in FIPS mode . Alternatively, you can switch FIPS mode for the entire RHEL system by following the procedure in Switching the system to FIPS mode . ( BZ#1991500 ) 8.12. Compilers and development tools Certain symbol-based probes do not work in SystemTap on the 64-bit ARM architecture The kernel configuration disables certain functionality needed for SystemTap . Consequently, some symbol-based probes do not work on the 64-bit ARM architecture. As a result, affected SystemTap scripts may not run or may not collect hits on desired probe points. Note that this bug has been fixed for the remaining architectures with the release of the RHBA-2022:5259 advisory. (BZ#2083727) 8.13. Identity Management RHEL 9 Kerberos client fails to authenticate a user using PKINIT against Heimdal KDC During the PKINIT authentication of an IdM user on a RHEL 9 Kerberos client, the Heimdal Key Distribution Center (KDC) on RHEL 9 or earlier uses the SHA-1 backup signature algorithm because the Kerberos client does not support the supportedCMSTypes field. However, the SHA-1 algorithm has been deprecated in RHEL 9 and therefore the user authentication fails. To work around this problem, enable support for the SHA-1 algorithm on your RHEL 9 clients with the following command: As a result, PKINIT authentication works between the Kerberos client and Heimdal KDC. For more details about supported backup signature algorithms, see Kerberos Encryption Types Defined for CMS Algorithm Identifiers . See also The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL 9 Kerberos agent .
( BZ#2068935 ) The PKINIT authentication of a user fails if a RHEL 9 Kerberos agent communicates with a non-RHEL 9 Kerberos agent If a RHEL 9 Kerberos agent interacts with another, non-RHEL 9 Kerberos agent in your environment, the Public Key Cryptography for initial authentication (PKINIT) authentication of a user fails. To work around the problem, perform one of the following actions: Set the RHEL 9 agent's crypto-policy to DEFAULT:SHA1 to allow the verification of SHA-1 signatures: Update the non-RHEL 9 agent to ensure it does not sign CMS data using the SHA-1 algorithm. For this, update your Kerberos packages to the versions that use SHA-256 instead of SHA-1: CentOS 9 Stream: krb5-1.19.1-15 RHEL 8.7: krb5-1.18.2-17 RHEL 7.9: krb5-1.15.1-53 Fedora Rawhide/36: krb5-1.19.2-7 Fedora 35/34: krb5-1.19.2-3 You must perform one of these actions regardless of whether the non-patched agent is a Kerberos client or the Key Distribution Center (KDC). As a result, the PKINIT authentication of a user works correctly. Note that for other operating systems, it is the krb5-1.20 release that ensures that the agent signs CMS data with SHA-256 instead of SHA-1. See also The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against older RHEL KDCs and AD KDCs . ( BZ#2077450 ) The DEFAULT:SHA1 sub-policy has to be set on RHEL 9 clients for PKINIT to work against older RHEL KDCs and AD KDCs The SHA-1 digest algorithm has been deprecated in RHEL 9, and CMS messages for Public Key Cryptography for initial authentication (PKINIT) are now signed with the stronger SHA-256 algorithm. While SHA-256 is used by default starting with RHEL 7.9 and RHEL 8.7, older Kerberos Key Distribution Centers (KDCs) on RHEL 7.8 and RHEL 8.6 and earlier still use the SHA-1 digest algorithm to sign CMS messages. So does the Active Directory (AD) KDC. As a result, RHEL 9 Kerberos clients fail to authenticate users using PKINIT against the following: KDCs running on RHEL 7.8 and earlier KDCs running on RHEL 8.6 and earlier AD KDCs To work around the problem, enable support for the SHA-1 algorithm on your RHEL 9 systems with the following command: See also RHEL 9 Kerberos client fails to authenticate a user using PKINIT against Heimdal KDC . ( BZ#2060798 ) Directory Server terminates unexpectedly when started in referral mode Due to a bug, global referral mode does not work in Directory Server. If you start the ns-slapd process with the refer option as the dirsrv user, Directory Server ignores the port settings and terminates unexpectedly. Trying to run the process as the root user changes SELinux labels and prevents the service from starting in normal mode in the future. There are no workarounds available. ( BZ#2053204 ) Configuring a referral for a suffix fails in Directory Server If you set a back-end referral in Directory Server, setting the state of the backend using the dsconf <instance_name> backend suffix set --state referral command fails with the following error: As a consequence, configuring a referral for suffixes fails. To work around the problem: Set the nsslapd-referral parameter manually: Set the back-end state: As a result, with the workaround, you can configure a referral for a suffix. ( BZ#2063140 ) The dsconf utility has no option to create fix-up tasks for the entryUUID plug-in The dsconf utility does not provide an option to create fix-up tasks for the entryUUID plug-in.
As a result, administrators cannot use dsconf to create a task to automatically add entryUUID attributes to existing entries. As a workaround, create a task manually: After the task has been created, Directory Server fixes entries with missing or invalid entryUUID attributes. ( BZ#2047175 ) Potential risk when using the default value for ldap_id_use_start_tls option Using ldap:// without TLS for identity lookups can pose a risk of an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 8.14. Desktop Firefox add-ons are disabled after upgrading to RHEL 9 If you upgrade from RHEL 8 to RHEL 9, all add-ons that you previously enabled in Firefox are disabled. To work around the problem, manually reinstall or update the add-ons. As a result, the add-ons are enabled as expected. ( BZ#2013247 ) VNC is not running after upgrading to RHEL 9 After upgrading from RHEL 8 to RHEL 9, the VNC server fails to start, even if it was previously enabled. To work around the problem, manually enable the vncserver service after the system upgrade: As a result, VNC is now enabled and starts after every system boot as expected. ( BZ#2060308 ) 8.15. Graphics infrastructures Matrox G200e shows no output on a VGA display Your display might show no graphical output if you use the following system configuration: The Matrox G200e GPU A display connected over the VGA controller As a consequence, you cannot use or install RHEL on this configuration. To work around the problem, use the following procedure: Boot the system to the boot loader menu. Add the module_blacklist=mgag200 option to the kernel command line. As a result, RHEL boots and shows graphical output as expected, but the maximum resolution is limited to 1024x768 at the 16-bit color depth. (BZ#1960467) X.org configuration utilities do not work under Wayland X.org utilities for manipulating the screen do not work in the Wayland session. Notably, the xrandr utility does not work under Wayland due to its different approach to handling resolutions, rotations, and layout.
(JIRA:RHELPLAN-121049) NVIDIA drivers might revert to X.org Under certain conditions, the proprietary NVIDIA drivers disable the Wayland display protocol and revert to the X.org display server: If the version of the NVIDIA driver is lower than 470. If the system is a laptop that uses hybrid graphics. If you have not enabled the required NVIDIA driver options. Additionally, Wayland is enabled but the desktop session uses X.org by default if the version of the NVIDIA driver is lower than 510. (JIRA:RHELPLAN-119001) Night Light is not available on Wayland with NVIDIA When the proprietary NVIDIA drivers are enabled on your system, the Night Light feature of GNOME is not available in Wayland sessions. The NVIDIA drivers do not currently support Night Light . (JIRA:RHELPLAN-119852) 8.16. The web console Removing USB host devices using the web console does not work as expected When you attach a USB device to a virtual machine (VM), the device number and bus number of the USB device might change after they are passed to the VM. As a consequence, using the web console to remove such devices fails due to the incorrect correlation of the device and bus numbers. To work around this problem, remove the <hostdev> part of the USB device from the VM's XML configuration. (JIRA:RHELPLAN-109067) Attaching multiple host devices using the web console does not work When you select multiple devices to attach to a virtual machine (VM) using the web console, only a single device is attached and the rest are ignored. To work around this problem, attach only one device at a time. (JIRA:RHELPLAN-115603) 8.17. Virtualization Installing a virtual machine over https in some cases fails Currently, the virt-install utility fails when attempting to install a guest operating system from an ISO source over an https connection - for example using virt-install --cdrom https://example/path/to/image.iso . Instead of creating a virtual machine (VM), the described operation terminates unexpectedly with an internal error: process exited while connecting to monitor message. To work around this problem, install qemu-kvm-block-curl on the host to enable https protocol support. Alternatively, use a different connection protocol or a different installation source. ( BZ#2014229 ) Using NVIDIA drivers in virtual machines disables Wayland Currently, NVIDIA drivers are not compatible with the Wayland graphical session. As a consequence, RHEL guest operating systems that use NVIDIA drivers automatically disable Wayland and load an Xorg session instead. This primarily occurs in the following scenarios: When you pass through an NVIDIA GPU device to a RHEL virtual machine (VM) When you assign an NVIDIA vGPU mediated device to a RHEL VM (JIRA:RHELPLAN-117234) The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the 'Milan' CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. (BZ#2077767) Network traffic performance in virtual machines might be reduced In some cases, RHEL 9.0 guest virtual machines (VMs) have somewhat decreased performance when handling high levels of network traffic.
( BZ#1945040 ) Disabling AVX causes VMs to become unbootable On a host machine that uses a CPU with Advanced Vector Extensions (AVX) support, attempting to boot a VM with AVX explicitly disabled currently fails, and instead triggers a kernel panic in the VM. (BZ#2005173) Failover virtio NICs are not assigned an IP address on Windows virtual machines Currently, when starting a Windows virtual machine (VM) with only a failover virtio NIC, the VM fails to assign an IP address to the NIC. Consequently, the NIC is unable to set up a network connection. Currently, there is no workaround. ( BZ#1969724 ) A hostdev interface with failover settings cannot be hot-plugged after being hot-unplugged After removing a hostdev network interface with failover configuration from a running virtual machine (VM), the interface currently cannot be re-attached to the same running VM. ( BZ#2052424 ) Live post-copy migration of VMs with failover VFs fails Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration. ( BZ#1817965 , BZ#1789206 ) 8.18. RHEL in cloud environments SR-IOV performs suboptimally in ARM 64 RHEL 9 virtual machines on Azure Currently, SR-IOV networking devices have significantly lower throughput and higher latency than expected in ARM 64 RHEL 9 virtual machines (VMs) running on a Microsoft Azure platform. (BZ#2068432) Mouse is not usable in RHEL 9 VMs on XenServer 7 with console proxy When running a RHEL 9 virtual machine (VM) on a XenServer 7 platform with a console proxy, it is not possible to use the mouse in the VM's GUI. To work around this problem, disable the Wayland compositor protocol in the VM as follows: Open the /etc/gdm/custom.conf file. Uncomment the WaylandEnable=false line. Save the file. In addition, note that Red Hat does not support XenServer as a platform for running RHEL VMs, and discourages using XenServer with RHEL in production environments. (BZ#2019593) Cloning or restoring RHEL 9 virtual machines that use LVM on Nutanix AHV causes non-root partitions to disappear When running a RHEL 9 guest operating system on a virtual machine (VM) hosted on the Nutanix AHV hypervisor, restoring the VM from a snapshot or cloning the VM currently causes non-root partitions in the VM to disappear if the guest is using Logical Volume Management (LVM). As a consequence, the following problems occur: After restoring the VM from a snapshot, the VM cannot boot, and instead enters emergency mode. A VM created by cloning cannot boot, and instead enters emergency mode. To work around these problems, do the following in emergency mode of the VM: Remove the LVM system devices file: rm /etc/lvm/devices/system.devices Recreate LVM device settings: vgimportdevices -a Reboot the VM. This makes it possible for the cloned or restored VM to boot up correctly. (BZ#2059545) The SR-IOV functionality of a network adapter attached to a Hyper-V virtual machine might not work Currently, when attaching a network adapter with single-root I/O virtualization (SR-IOV) enabled to a RHEL 9 virtual machine (VM) running on the Microsoft Hyper-V hypervisor, the SR-IOV functionality in some cases does not work correctly. To work around this problem, disable SR-IOV in the VM configuration, and then enable it again. In the Hyper-V Manager window, right-click the VM.
In the contextual menu, navigate to Settings/Network Adapter/Hardware Acceleration . Uncheck Enable SR-IOV . Click Apply . Repeat steps 1 and 2 to navigate to the Enable SR-IOV option again. Check Enable SR-IOV . Click Apply . (BZ#2030922) Customizing RHEL 9 guests on ESXi sometimes causes networking problems Currently, customizing a RHEL 9 guest operating system in the VMware ESXi hypervisor does not work correctly with NetworkManager key files. As a consequence, if the guest is using such a key file, it will have incorrect network settings, such as the IP address or the gateway. For details and workaround instructions, see the VMware Knowledge Base . (BZ#2037657) 8.19. Supportability Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting the huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For a one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on the specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: (BZ#1869561) 8.20. Containers Container images signed with a Beta GPG key cannot be pulled Currently, when you try to pull RHEL 9 Beta container images, podman exits with the error message: Error: Source image rejected: None of the signatures were accepted . The images fail to be pulled due to current builds being configured to not trust the RHEL Beta GPG keys by default. As a workaround, ensure that the Red Hat Beta GPG key is stored on your local system and update the existing trust scope with the podman image trust set command for the appropriate beta namespace. If you do not have the Beta GPG key stored locally, you can pull it by running the following command: To add the Beta GPG key as trusted to your namespace, use one of the following commands: and Replace namespace with ubi9-beta or rhel9-beta . ( BZ#2020026 ) Podman fails to pull a container "X509: certificate signed by unknown authority" If you have your own internal registry signed by your own CA certificate, then you have to import the certificate onto your host machine. Otherwise, an error occurs: Import the CA certificates on your host: Then you can pull container images from the internal registry. ( BZ#2027576 ) Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: (JIRA:RHELPLAN-96940) podman system connection add and podman image scp fail Podman uses SHA-1 hashes for the RSA key exchange. The regular SSH connection among machines using RSA keys works, while the podman system connection add and podman image scp commands do not work using the same RSA keys, because the SHA-1 hashes are not accepted for key exchange on RHEL 9: To work around this problem, use ED25519 keys: Connect to the remote machine: Record the ssh destination for the Podman service: Verify that the ssh destination was recorded: Note that with the release of the RHBA-2022:5951 advisory, the problem has been fixed.  (JIRA:RHELPLAN-121180)
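As a minimal sketch of the ED25519 workaround described above (not an official procedure), the following commands generate an ED25519 key pair, install the public key on a remote host, and register a Podman remote connection with it; the user name, host name, and connection name are placeholders:

# Sketch only: user, host, and connection names are placeholders.
# Generate an ED25519 key pair and install the public key on the remote machine.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''
ssh-copy-id -i ~/.ssh/id_ed25519.pub core@remote.example.com
# Register the remote Podman service using the ED25519 identity,
# then confirm that the connection was recorded.
podman system connection add --identity ~/.ssh/id_ed25519 test_connection ssh://core@remote.example.com
podman system connection list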
|
[
"%pre wipefs -a /dev/sda %end",
"ValueError: [digital envelope routines] unsupported",
"10:20:56,416 DDEBUG dnf: RPM transaction over.",
"To have a specific working area directory prefix for Relax-and-Recover specify in /etc/rear/local.conf something like # export TMPDIR=\"/prefix/for/rear/working/directory\" # where /prefix/for/rear/working/directory must already exist. This is useful for example when there is not sufficient free space in /tmp or USDTMPDIR for the ISO image or even the backup archive.",
"mktemp: failed to create file via template '/prefix/for/rear/working/directory/tmp.XXXXXXXXXX': No such file or directory cp: missing destination file operand after '/etc/rear/mappings/mac' Try 'cp --help' for more information. No network interface mapping is specified in /etc/rear/mappings/mac",
"ERROR: Could not create build area",
"dnf install libxkbcommon",
"SignatureAlgorithms = RSA+SHA256:RSA+SHA512:RSA+SHA384:ECDSA+SHA256:ECDSA+SHA512:ECDSA+SHA384 MaxProtocol = TLSv1.2",
"Title: Set SSH Client Alive Count Max to zero CCE Identifier: CCE-90271-8 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0 Title: Set SSH Idle Timeout Interval CCE Identifier: CCE-90811-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout",
"dnf install -y ansible-core scap-security-guide rhc-worker-playbook",
"ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml",
"systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer",
"nmcli connection show",
"nmcli connection up \"<profile_name>\"",
"grubby --args crashkernel=256M --update-kernel ALL",
"kdumpctl estimate",
"grubby --args=crashkernel=652M --update-kernel=ALL",
"reboot",
"HSCL2957 Either there is currently no RMC connection between the management console and the partition <LPAR name> or the partition does not support dynamic partitioning operations. Verify the network setup on the management console and the partition and ensure that any firewall authentication between the management console and the partition has occurred. Run the management console diagrmc command to identify problems that might be causing no RMC connection.",
"systemctl enable --now blk-availability.service",
"cat /sys/class/fc_host/host12/supported_speeds 16 Gbit, 32 Gbit, 20 Gbit",
"update-crypto-policies --set DEFAULT:SHA1",
"update-crypto-policies --set DEFAULT:SHA1",
"update-crypto-policies --set DEFAULT:SHA1",
"Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config changetype: modify add: nsslapd-referral nsslapd-referral: ldap://remote_server:389/dc=example,dc=com",
"dsconf <instance_name> backend suffix set --state referral",
"ldapadd -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=entryuuid_fixup___<time_stamp__,cn=entryuuid task,cn=tasks,cn=config objectClass: top objectClass: extensibleObject basedn: __<fixup base tree>__ cn: entryuuid_fixup___<time_stamp>__ filter: __<filtered_entry>__",
"systemctl enable --now vncserver@: port-number",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"sudo wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta https://www.redhat.com/security/data/f21541eb.txt",
"sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.access.redhat.com/ namespace",
"sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.redhat.io/ namespace",
"x509: certificate signed by unknown authority",
"cd /etc/pki/ca-trust/source/anchors/ curl -O <your_certificate>.crt update-ca-trust",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd",
"podman system connection add --identity ~/.ssh/id_rsa test_connection USDREMOTE_SSH_MACHINE Error: failed to connect: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain",
"ssh -i ~/.ssh/id_ed25519 USDREMOTE_SSH_MACHINE",
"podman system connection add --identity ~/.ssh/id_ed25519 test_connection USDREMOTE_SSH_MACHINE",
"podman system connection list"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.0_release_notes/known-issues
|
Chapter 2. Requirements for bare metal provisioning
|
Chapter 2. Requirements for bare metal provisioning To enable cloud users to launch bare-metal instances, your Red Hat OpenStack Services on OpenShift (RHOSO) environment must have the required hardware and network configuration. 2.1. Hardware requirements The hardware requirements for the bare-metal machines that you want to make available to your cloud users for provisioning depend on the operating system. For information about the hardware requirements for Red Hat Enterprise Linux installations, see the Product Documentation for Red Hat Enterprise Linux . All bare-metal machines that you want to make available to your cloud users for provisioning must have the following capabilities: A NIC to connect to the bare-metal network. The Redfish power management type, which is connected to a network that is reachable from the ironic-conductor container. Note Do not use the IPMI power management type due to security concerns. Use of Redfish as the power management type optimizes the performance of the Bare Metal Provisioning service. If the Bare Metal Provisioning service is configured to use PXE or iPXE for provisioning, then PXE boot must be enabled on the network interface that is attached to the bare-metal network, and disabled on all other network interfaces for that bare-metal node. This is not a requirement if the Bare Metal Provisioning service is configured to use virtual media for provisioning. If the Bare Metal Provisioning service is configured to use virtual media for provisioning, through Redfish or a vendor-specific boot interface on each node, then the bare-metal nodes must be able to reach cluster resources for virtual media disks or other disk images. 2.2. Networking requirements The cloud operator must create a private bare-metal network for the Bare Metal Provisioning service to use for the following operations: The provisioning and management of the bare-metal nodes that host the bare-metal instances. Cleaning bare-metal nodes when a node is unprovisioned. Project access to the bare-metal nodes. In order for the Bare Metal Provisioning service to serve PXE boot and DHCP requests, the bare-metal node must be attached either to a port that does not use a VLAN, or to a port that is a VLAN trunk where the native VLAN is the bare-metal network. The Bare Metal Provisioning service is designed for a trusted tenant environment because the bare-metal nodes have direct access to the control plane network of your Red Hat OpenStack Services on OpenShift (RHOSO) environment. Cloud users have direct access to the public OpenStack APIs, and to the bare-metal network. A flat bare-metal network can introduce security concerns because cloud users have indirect access to the control plane network. To mitigate this risk, you can configure an isolated bare metal provisioning network for the Bare Metal Provisioning service that does not have access to the control plane. The bare-metal network must be untagged for provisioning, and must also have access to the Bare Metal Provisioning API. You must provide access to the bare-metal network for the following: The control plane that hosts the Bare Metal Provisioning service. The NIC from which the bare-metal machine is configured to PXE-boot.
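As an illustrative check only (not part of the official requirements), assuming a bare-metal node whose BMC exposes Redfish at the placeholder address 192.0.2.10 with placeholder credentials, the following commands, run from a host on the same network that the ironic-conductor container can reach, confirm that the Redfish power management endpoint is responding:

# Hypothetical Redfish reachability check; address and credentials are placeholders.
curl -k -u admin:password https://192.0.2.10/redfish/v1/
# List the systems managed by this BMC; a JSON collection in the response
# indicates that the Redfish service is reachable and answering requests.
curl -k -u admin:password https://192.0.2.10/redfish/v1/Systems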
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/assembly_requirements-for-bare-metal-provisioning
|
Appendix C. Application development resources
|
Appendix C. Application development resources For additional information about application development with OpenShift, see: OpenShift Interactive Learning Portal To reduce network load and shorten the build time of your application, set up a Nexus mirror for Maven on your OpenShift Container Platform: Setting Up a Nexus Mirror for Maven
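As a minimal sketch only, assuming a Nexus instance is already reachable at a placeholder URL such as http://nexus.example.com:8081, you can route Maven repository traffic through the mirror by passing a custom settings file to the build:

# Hypothetical example: write a minimal Maven settings file that points at a
# Nexus mirror (the URL is a placeholder), then build the application against it.
cat > nexus-settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://nexus.example.com:8081/repository/maven-public/</url>
    </mirror>
  </mirrors>
</settings>
EOF
mvn -s nexus-settings.xml clean package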
| null |
https://docs.redhat.com/en/documentation/red_hat_support_for_spring_boot/2.7/html/dekorate_guide_for_spring_boot_developers/application-development-resources
|
Troubleshooting Guide
|
Troubleshooting Guide Red Hat Ceph Storage 8 Troubleshooting Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
|
[
"cephadm shell",
"ceph health detail",
"ceph -W cephadm",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 osds down; 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set [WRN] OSD_DOWN: 1 osds down osd.1 (root=default,host=host01) is down [WRN] OSD_FLAGS: 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set osd.1 has flags noup",
"ceph health mute HEALTH_MESSAGE",
"ceph health mute OSD_DOWN",
"ceph health mute HEALTH_MESSAGE DURATION",
"ceph health mute OSD_DOWN 10m",
"ceph -s cluster: id: 81a4597a-b711-11eb-8cb8-001a4a000740 health: HEALTH_OK (muted: OSD_DOWN(9m) OSD_FLAGS(9m)) services: mon: 3 daemons, quorum host01,host02,host03 (age 33h) mgr: host01.pzhfuh(active, since 33h), standbys: host02.wsnngf, host03.xwzphg osd: 11 osds: 10 up (since 4m), 11 in (since 5d) data: pools: 1 pools, 1 pgs objects: 13 objects, 0 B usage: 85 MiB used, 165 GiB / 165 GiB avail pgs: 1 active+clean",
"ceph health mute HEALTH_MESSAGE DURATION --sticky",
"ceph health mute OSD_DOWN 1h --sticky",
"ceph health unmute HEALTH_MESSAGE",
"ceph health unmute OSD_DOWN",
"dnf install sos",
"sos report -a --all-logs",
"sos report --all-logs -e ceph_mgr,ceph_common,ceph_mon,ceph_osd,ceph_ansible,ceph_mds,ceph_rgw",
"debug_ms = 5 debug_mon = 20 debug_paxos = 20 debug_auth = 20",
"2022-05-12 12:37:04.278761 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:04.278792 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 min_last_epoch_clean 322 2022-05-12 12:37:04.278795 7f45a9afc700 10 mon.cephn2@0(leader).log v1010106 log 2022-05-12 12:37:04.278799 7f45a9afc700 10 mon.cephn2@0(leader).auth v2877 auth 2022-05-12 12:37:04.278811 7f45a9afc700 20 mon.cephn2@0(leader) e1 sync_trim_providers 2022-05-12 12:37:09.278914 7f45a9afc700 11 mon.cephn2@0(leader) e1 tick 2022-05-12 12:37:09.278949 7f45a9afc700 10 mon.cephn2@0(leader).pg v8126 v8126: 64 pgs: 64 active+clean; 60168 kB data, 172 MB used, 20285 MB / 20457 MB avail 2022-05-12 12:37:09.278975 7f45a9afc700 10 mon.cephn2@0(leader).paxosservice(pgmap 7511..8126) maybe_trim trim_to 7626 would only trim 115 < paxos_service_trim_min 250 2022-05-12 12:37:09.278982 7f45a9afc700 10 mon.cephn2@0(leader).osd e322 e322: 2 osds: 2 up, 2 in 2022-05-12 12:37:09.278989 7f45a9afc700 5 mon.cephn2@0(leader).paxos(paxos active c 1028850..1029466) is_readable = 1 - now=2021-08-12 12:37:09.278990 lease_expire=0.000000 has v0 lc 1029466 . 2022-05-12 12:59:18.769963 7f45a92fb700 1 -- 192.168.0.112:6789/0 <== osd.1 192.168.0.114:6800/2801 5724 ==== pg_stats(0 pgs tid 3045 v 0) v1 ==== 124+0+0 (2380105412 0 0) 0x5d96300 con 0x4d5bf40 2022-05-12 12:59:18.770053 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.114:6800/2801 -- pg_stats_ack(0 pgs tid 3045) v1 -- ?+0 0x550ae00 con 0x4d5bf40 2022-05-12 12:59:32.916397 7f45a9afc700 0 mon.cephn2@0(leader).data_health(1) update_stats avail 53% total 1951 MB, used 780 MB, avail 1053 MB . 2022-05-12 13:01:05.256263 7f45a92fb700 1 -- 192.168.0.112:6789/0 --> 192.168.0.113:6800/2410 -- mon_subscribe_ack(300s) v1 -- ?+0 0x4f283c0 con 0x4d5b440",
"debug_ms = 5 debug_osd = 20",
"2022-05-12 11:27:53.869151 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.17.4:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x63baa00 con 0x578dee0 2022-05-12 11:27:53.869214 7f5d55d84700 1 -- 192.168.17.3:0/2410 --> 192.168.0.114:6801/2801 -- osd_ping(ping e322 stamp 2021-08-12 11:27:53.869147) v2 -- ?+0 0x638f200 con 0x578e040 2022-05-12 11:27:53.870215 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.0.114:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x63c1a00 con 0x578e040 2022-05-12 11:27:53.870698 7f5d6359f700 1 -- 192.168.17.3:0/2410 <== osd.1 192.168.17.4:6801/2801 109210 ==== osd_ping(ping_reply e322 stamp 2021-08-12 11:27:53.869147) v2 ==== 47+0+0 (261193640 0 0) 0x6313200 con 0x578dee0 . 2022-05-12 11:28:10.432313 7f5d6e71f700 5 osd.0 322 tick 2022-05-12 11:28:10.432375 7f5d6e71f700 20 osd.0 322 scrub_random_backoff lost coin flip, randomly backing off 2022-05-12 11:28:10.432381 7f5d6e71f700 10 osd.0 322 do_waiters -- start 2022-05-12 11:28:10.432383 7f5d6e71f700 10 osd.0 322 do_waiters -- finish",
"ceph tell TYPE . ID injectargs --debug- SUBSYSTEM VALUE [-- NAME VALUE ]",
"ceph tell osd.0 injectargs --debug-osd 0/5",
"ceph daemon NAME config show | less",
"ceph daemon osd.0 config show | less",
"[global] debug_ms = 1/5 [mon] debug_mon = 20 debug_paxos = 1/5 debug_auth = 2 [osd] debug_osd = 1/5 debug_monc = 5/20 [mds] debug_mds = 1",
"rotate 7 weekly size SIZE compress sharedscripts",
"rotate 7 weekly size 500 MB compress sharedscripts size 500M",
"crontab -e",
"30 * * * * /usr/sbin/logrotate /etc/logrotate.d/ceph-d3bb5396-c404-11ee-9e65-002590fc2a2e >/dev/null 2>&1",
"logrotate -f",
"logrotate -f /etc/logrotate.d/ceph-12ab345c-1a2b-11ed-b736-fa163e4f6220",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 -rw-r--r--. 1 ceph ceph 412 Sep 28 09:26 opslog.log.1.gz",
"/usr/local/bin/s3cmd ls",
"/usr/local/bin/s3cmd mb s3:// NEW_BUCKET_NAME",
"/usr/local/bin/s3cmd mb s3://bucket1 Bucket `s3://bucket1` created",
"ll LOG_LOCATION",
"ll /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220 total 852 -rw-r--r--. 1 ceph ceph 920 Jun 29 02:17 opslog.log -rw-r--r--. 1 ceph ceph 412 Jun 28 09:26 opslog.log.1.gz",
"tail -f LOG_LOCATION /opslog.log",
"tail -f /var/log/ceph/12ab345c-1a2b-11ed-b736-fa163e4f6220/opslog.log {\"bucket\":\"\",\"time\":\"2022-09-29T06:17:03.133488Z\",\"time_local\":\"2022-09- 29T06:17:03.133488+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"list_buckets\",\"uri\":\"GET / HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":232, \"bytes_received\":0,\"object_size\":0,\"total_time\":9,\"user_agent\":\"\",\"referrer\": \"\",\"trans_id\":\"tx00000c80881a9acd2952a-006335385f-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false} {\"bucket\":\"cn1\",\"time\":\"2022-09-29T06:17:10.521156Z\",\"time_local\":\"2022-09- 29T06:17:10.521156+0000\",\"remote_addr\":\"10.0.211.66\",\"user\":\"test1\", \"operation\":\"create_bucket\",\"uri\":\"PUT /cn1/ HTTP/1.1\",\"http_status\":\"200\",\"error_code\":\"\",\"bytes_sent\":0, \"bytes_received\":0,\"object_size\":0,\"total_time\":106,\"user_agent\":\"\", \"referrer\":\"\",\"trans_id\":\"tx0000058d60c593632c017-0063353866-175e5-primary\", \"authentication_type\":\"Local\",\"access_key_id\":\"1234\",\"temp_url\":false}",
"dnf install net-tools dnf install telnet",
"cat /etc/ceph/ceph.conf minimal ceph.conf for 57bddb48-ee04-11eb-9962-001a4a000672 [global] fsid = 57bddb48-ee04-11eb-9962-001a4a000672 mon_host = [v2:10.74.249.26:3300/0,v1:10.74.249.26:6789/0] [v2:10.74.249.163:3300/0,v1:10.74.249.163:6789/0] [v2:10.74.254.129:3300/0,v1:10.74.254.129:6789/0] [mon.host01] public network = 10.74.248.0/21",
"ip link list 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:1a:4a:00:06:72 brd ff:ff:ff:ff:ff:ff",
"ping SHORT_HOST_NAME",
"ping host02",
"firewall-cmd --info-zone= ZONE telnet IP_ADDRESS PORT",
"firewall-cmd --info-zone=public public (active) target: default icmp-block-inversion: no interfaces: ens3 sources: services: ceph ceph-mon cockpit dhcpv6-client ssh ports: 9283/tcp 8443/tcp 9093/tcp 9094/tcp 3000/tcp 9100/tcp 9095/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: telnet 192.168.0.22 9100",
"ethtool -S INTERFACE",
"ethtool -S ens3 | grep errors NIC statistics: rx_fcs_errors: 0 rx_align_errors: 0 rx_frame_too_long_errors: 0 rx_in_length_errors: 0 rx_out_length_errors: 0 tx_mac_errors: 0 tx_carrier_sense_errors: 0 tx_errors: 0 rx_errors: 0",
"ifconfig ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.74.249.26 netmask 255.255.248.0 broadcast 10.74.255.255 inet6 fe80::21a:4aff:fe00:672 prefixlen 64 scopeid 0x20<link> inet6 2620:52:0:4af8:21a:4aff:fe00:672 prefixlen 64 scopeid 0x0<global> ether 00:1a:4a:00:06:72 txqueuelen 1000 (Ethernet) RX packets 150549316 bytes 56759897541 (52.8 GiB) RX errors 0 dropped 176924 overruns 0 frame 0 TX packets 55584046 bytes 62111365424 (57.8 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 9373290 bytes 16044697815 (14.9 GiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9373290 bytes 16044697815 (14.9 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0",
"netstat -ai Kernel Interface table Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg ens3 1500 311847720 0 364903 0 114341918 0 0 0 BMRU lo 65536 19577001 0 0 0 19577001 0 0 0 LRU",
"dnf install iperf3",
"iperf3 -s ----------------------------------------------------------- Server listening on 5201 -----------------------------------------------------------",
"iperf3 -c mon Connecting to host mon, port 5201 [ 4] local xx.x.xxx.xx port 52270 connected to xx.x.xxx.xx port 5201 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 114 MBytes 954 Mbits/sec 0 409 KBytes [ 4] 1.00-2.00 sec 113 MBytes 945 Mbits/sec 0 409 KBytes [ 4] 2.00-3.00 sec 112 MBytes 943 Mbits/sec 0 454 KBytes [ 4] 3.00-4.00 sec 112 MBytes 941 Mbits/sec 0 471 KBytes [ 4] 4.00-5.00 sec 112 MBytes 940 Mbits/sec 0 471 KBytes [ 4] 5.00-6.00 sec 113 MBytes 945 Mbits/sec 0 471 KBytes [ 4] 6.00-7.00 sec 112 MBytes 937 Mbits/sec 0 488 KBytes [ 4] 7.00-8.00 sec 113 MBytes 947 Mbits/sec 0 520 KBytes [ 4] 8.00-9.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes [ 4] 9.00-10.00 sec 112 MBytes 939 Mbits/sec 0 520 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 0 sender [ 4] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver iperf Done.",
"ethtool INTERFACE",
"ethtool ens3 Settings for ens3: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full Advertised pause frame use: Symmetric Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Link partner advertised pause frame use: Symmetric Link partner advertised auto-negotiation: Yes Link partner advertised FEC modes: Not reported Speed: 1000Mb/s 1 Duplex: Full 2 Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: off Supports Wake-on: g Wake-on: d Current message level: 0x000000ff (255) drv probe link timer ifdown ifup rx_err tx_err Link detected: yes 3",
"systemctl status chronyd",
"systemctl enable chronyd systemctl start chronyd",
"chronyc sources chronyc sourcestats chronyc tracking",
"HEALTH_WARN 1 mons down, quorum 1,2 mon.b,mon.c mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum)",
"systemctl status ceph- FSID @ DAEMON_NAME systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl status [email protected] systemctl start [email protected]",
"Corruption: error in middle of record Corruption: 1 missing files; example: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb",
"Caught signal (Bus error)",
"ceph daemon ID mon_status",
"ceph daemon mon.host01 mon_status",
"mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum) mon.a addr 127.0.0.1:6789/0 clock skew 0.08235s > max 0.05s (latency 0.0045s)",
"2022-05-04 07:28:32.035795 7f806062e700 0 log [WRN] : mon.a 127.0.0.1:6789/0 clock skew 0.14s > max 0.05s 2022-05-04 04:31:25.773235 7f4997663700 0 log [WRN] : message from mon.1 was stamped 0.186257s in the future, clocks not synchronized",
"mon.ceph1 store is getting too big! 48031 MB >= 15360 MB -- 62% avail",
"du -sch /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME /store.db/",
"du -sh /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 109M /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 47G /var/lib/ceph/mon/ceph-ceph1/store.db/ 47G total",
"{ \"name\": \"mon.3\", \"rank\": 2, \"state\": \"peon\", \"election_epoch\": 96, \"quorum\": [ 1, 2 ], \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 1, \"fsid\": \"d5552d32-9d1d-436c-8db1-ab5fc2c63cd0\", \"modified\": \"0.000000\", \"created\": \"0.000000\", \"mons\": [ { \"rank\": 0, \"name\": \"mon.1\", \"addr\": \"172.25.1.10:6789\\/0\" }, { \"rank\": 1, \"name\": \"mon.2\", \"addr\": \"172.25.1.12:6789\\/0\" }, { \"rank\": 2, \"name\": \"mon.3\", \"addr\": \"172.25.1.13:6789\\/0\" } ] } }",
"ceph mon getmap -o /tmp/monmap",
"systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl stop [email protected]",
"ceph-mon -i ID --extract-monmap /tmp/monmap",
"ceph-mon -i mon.a --extract-monmap /tmp/monmap",
"systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl stop [email protected]",
"ceph-mon -i ID --inject-monmap /tmp/monmap",
"ceph-mon -i mon.host01 --inject-monmap /tmp/monmap",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"rm -rf /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME",
"rm -rf /var/lib/ceph/mon/remote-host1",
"ceph mon remove SHORT_HOST_NAME --cluster CLUSTER_NAME",
"ceph mon remove host01 --cluster remote",
"ceph tell mon. HOST_NAME compact",
"ceph tell mon.host01 compact",
"[mon] mon_compact_on_start = true",
"systemctl restart ceph- FSID @ DAEMON_NAME",
"systemctl restart [email protected]",
"ceph mon stat",
"systemctl status ceph- FSID @ DAEMON_NAME systemctl stop ceph- FSID @ DAEMON_NAME",
"systemctl status [email protected] systemctl stop [email protected]",
"ceph-monstore-tool /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME compact",
"ceph-monstore-tool /var/lib/ceph/b404c440-9e4c-11ec-a28a-001a4a0001df/mon.host01 compact",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"firewall-cmd --add-port 6800-7300/tcp firewall-cmd --add-port 6800-7300/tcp --permanent",
"Corruption: error in middle of record Corruption: 1 missing files; e.g.: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb",
"ceph-volume lvm list",
"mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-USDi",
"for i in { OSD_ID }; do restorecon /var/lib/ceph/osd/ceph-USDi; done",
"for i in { OSD_ID }; do chown -R ceph:ceph /var/lib/ceph/osd/ceph-USDi; done",
"ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev OSD-DATA --path /var/lib/ceph/osd/ceph- OSD-ID",
"ln -snf BLUESTORE DATABASE /var/lib/ceph/osd/ceph- OSD-ID /block.db",
"cd /root/ ms=/tmp/monstore/ db=/root/db/ db_slow=/root/db.slow/ mkdir USDms for host in USDosd_nodes; do echo \"USDhost\" rsync -avz USDms USDhost:USDms rsync -avz USDdb USDhost:USDdb rsync -avz USDdb_slow USDhost:USDdb_slow rm -rf USDms rm -rf USDdb rm -rf USDdb_slow sh -t USDhost <<EOF for osd in /var/lib/ceph/osd/ceph-*; do ceph-objectstore-tool --type bluestore --data-path \\USDosd --op update-mon-db --mon-store-path USDms done EOF rsync -avz USDhost:USDms USDms rsync -avz USDhost:USDdb USDdb rsync -avz USDhost:USDdb_slow USDdb_slow done",
"ceph-authtool /etc/ceph/ceph.client.admin.keyring -n mon. --cap mon 'allow *' --gen-key cat /etc/ceph/ceph.client.admin.keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"",
"mv /root/db/*.sst /root/db.slow/*.sst /tmp/monstore/store.db",
"ceph-monstore-tool /tmp/monstore rebuild -- --keyring /etc/ceph/ceph.client.admin",
"mv /var/lib/ceph/mon/ceph- HOSTNAME /store.db /var/lib/ceph/mon/ceph- HOSTNAME /store.db.corrupted",
"scp -r /tmp/monstore/store.db HOSTNAME :/var/lib/ceph/mon/ceph- HOSTNAME /",
"chown -R ceph:ceph /var/lib/ceph/mon/ceph- HOSTNAME /store.db",
"umount /var/lib/ceph/osd/ceph-*",
"systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]",
"ceph -s",
"ceph auth import -i /etc/ceph/ceph.mgr. HOSTNAME .keyring systemctl start ceph- FSID @ DAEMON_NAME",
"systemctl start ceph-b341e254-b165-11ed-a564-ac1f6bb26e8c@mgr.extensa003.exrqql.service",
"systemctl start ceph- FSID @osd. OSD_ID",
"systemctl start [email protected]",
"ceph -s",
"HEALTH_ERR 1 full osds osd.3 is full at 95%",
"ceph df",
"health: HEALTH_WARN 3 backfillfull osd(s) Low space hindering backfill (add storage if this doesn't resolve itself): 32 pgs backfill_toofull",
"ceph df",
"ceph osd set-backfillfull-ratio VALUE",
"ceph osd set-backfillfull-ratio 0.92",
"HEALTH_WARN 1 nearfull osds osd.2 is near full at 85%",
"ceph osd df",
"df",
"HEALTH_WARN 1/3 in osds are down",
"ceph health detail HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"systemctl restart ceph- FSID @osd. OSD_ID",
"systemctl restart [email protected]",
"FAILED assert(0 == \"hit suicide timeout\")",
"dmesg",
"xfs_log_force: error -5 returned",
"Caught signal (Segmentation fault)",
"wrongly marked me down heartbeat_check: no reply from osd.2 since back",
"ceph -w | grep osds 2022-05-05 06:27:20.810535 mon.0 [INF] osdmap e609: 9 osds: 8 up, 9 in 2022-05-05 06:27:24.120611 mon.0 [INF] osdmap e611: 9 osds: 7 up, 9 in 2022-05-05 06:27:25.975622 mon.0 [INF] HEALTH_WARN; 118 pgs stale; 2/9 in osds are down 2022-05-05 06:27:27.489790 mon.0 [INF] osdmap e614: 9 osds: 6 up, 9 in 2022-05-05 06:27:36.540000 mon.0 [INF] osdmap e616: 9 osds: 7 up, 9 in 2022-05-05 06:27:39.681913 mon.0 [INF] osdmap e618: 9 osds: 8 up, 9 in 2022-05-05 06:27:43.269401 mon.0 [INF] osdmap e620: 9 osds: 9 up, 9 in 2022-05-05 06:27:54.884426 mon.0 [INF] osdmap e622: 9 osds: 8 up, 9 in 2022-05-05 06:27:57.398706 mon.0 [INF] osdmap e624: 9 osds: 7 up, 9 in 2022-05-05 06:27:59.669841 mon.0 [INF] osdmap e625: 9 osds: 6 up, 9 in 2022-05-05 06:28:07.043677 mon.0 [INF] osdmap e628: 9 osds: 7 up, 9 in 2022-05-05 06:28:10.512331 mon.0 [INF] osdmap e630: 9 osds: 8 up, 9 in 2022-05-05 06:28:12.670923 mon.0 [INF] osdmap e631: 9 osds: 9 up, 9 in",
"2022-05-25 03:44:06.510583 osd.50 127.0.0.1:6801/149046 18992 : cluster [WRN] map e600547 wrongly marked me down",
"2022-05-25 19:00:08.906864 7fa2a0033700 -1 osd.254 609110 heartbeat_check: no reply from osd.2 since back 2021-07-25 19:00:07.444113 front 2021-07-25 18:59:48.311935 (cutoff 2021-07-25 18:59:48.906862)",
"ceph health detail HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"ceph osd tree | grep down",
"ceph osd set noup ceph osd set nodown",
"HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"2022-05-24 13:18:10.024659 osd.1 127.0.0.1:6812/3032 9 : cluster [WRN] 6 slow requests, 6 included below; oldest blocked for > 61.758455 secs",
"2022-05-25 03:44:06.510583 osd.50 [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]",
"cephadm shell",
"ceph osd set noout",
"ceph osd unset noout",
"HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"cephadm shell",
"ceph osd tree | grep -i down ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF 0 hdd 0.00999 osd.0 down 1.00000 1.00000",
"ceph osd out OSD_ID .",
"ceph osd out osd.0 marked out osd.0.",
"ceph -w | grep backfill 2022-05-02 04:48:03.403872 mon.0 [INF] pgmap v10293282: 431 pgs: 1 active+undersized+degraded+remapped+backfilling, 28 active+undersized+degraded, 49 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 294 active+clean; 72347 MB data, 101302 MB used, 1624 GB / 1722 GB avail; 227 kB/s rd, 1358 B/s wr, 12 op/s; 10626/35917 objects degraded (29.585%); 6757/35917 objects misplaced (18.813%); 63500 kB/s, 15 objects/s recovering 2022-05-02 04:48:04.414397 mon.0 [INF] pgmap v10293283: 431 pgs: 2 active+undersized+degraded+remapped+backfilling, 75 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 295 active+clean; 72347 MB data, 101398 MB used, 1623 GB / 1722 GB avail; 969 kB/s rd, 6778 B/s wr, 32 op/s; 10626/35917 objects degraded (29.585%); 10580/35917 objects misplaced (29.457%); 125 MB/s, 31 objects/s recovering 2022-05-02 04:48:00.380063 osd.1 [INF] 0.6f starting backfill to osd.0 from (0'0,0'0] MAX to 2521'166639 2022-05-02 04:48:00.380139 osd.1 [INF] 0.48 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'43079 2022-05-02 04:48:00.380260 osd.1 [INF] 0.d starting backfill to osd.0 from (0'0,0'0] MAX to 2513'136847 2022-05-02 04:48:00.380849 osd.1 [INF] 0.71 starting backfill to osd.0 from (0'0,0'0] MAX to 2331'28496 2022-05-02 04:48:00.381027 osd.1 [INF] 0.51 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'87544",
"ceph orch daemon stop OSD_ID",
"ceph orch daemon stop osd.0",
"ceph orch osd rm OSD_ID --replace",
"ceph orch osd rm 0 --replace",
"ceph orch apply osd --all-available-devices",
"ceph orch apply osd --all-available-devices --unmanaged=true",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph osd tree",
"sysctl -w kernel.pid.max=4194303",
"kernel.pid.max = 4194303",
"cephadm shell",
"ceph osd dump | grep -i full full_ratio 0.95",
"ceph osd set-full-ratio 0.97",
"ceph osd dump | grep -i full full_ratio 0.97",
"ceph -w",
"ceph osd set-full-ratio 0.95",
"ceph osd dump | grep -i full full_ratio 0.95",
"radosgw-admin user info --uid SYNCHRONIZATION_USER, and radosgw-admin zone get",
"radosgw-admin sync status",
"radosgw-admin data sync status --shard-id= X --source-zone= ZONE_NAME",
"radosgw-admin data sync status --shard-id=27 --source-zone=us-east { \"shard_id\": 27, \"marker\": { \"status\": \"incremental-sync\", \"marker\": \"1_1534494893.816775_131867195.1\", \"next_step_marker\": \"\", \"total_entries\": 1, \"pos\": 0, \"timestamp\": \"0.000000\" }, \"pending_buckets\": [], \"recovering_buckets\": [ \"pro-registry:4ed07bb2-a80b-4c69-aa15-fdc17ae6f5f2.314303.1:26\" ] }",
"radosgw-admin bucket sync status --bucket= X .",
"radosgw-admin sync error list",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. RGW_ID .asok perf dump data-sync-from- ZONE_NAME",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.host02-rgw0.103.94309060818504.asok perf dump data-sync-from-us-west { \"data-sync-from-us-west\": { \"fetch bytes\": { \"avgcount\": 54, \"sum\": 54526039885 }, \"fetch not modified\": 7, \"fetch errors\": 0, \"poll latency\": { \"avgcount\": 41, \"sum\": 2.533653367, \"avgtime\": 0.061796423 }, \"poll errors\": 0 } }",
"radosgw-admin sync status realm d713eec8-6ec4-4f71-9eaf-379be18e551b (india) zonegroup ccf9e0b2-df95-4e0a-8933-3b17b64c52b7 (shared) zone 04daab24-5bbd-4c17-9cf5-b1981fd7ff79 (primary) current time 2022-09-15T06:53:52Z zonegroup features enabled: resharding metadata sync no sync (zone is master) data sync source: 596319d2-4ffe-4977-ace1-8dd1790db9fb (secondary) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin data sync init --source-zone primary",
"ceph orch restart rgw.myrgw",
"2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider",
"date;radosgw-admin bucket list Mon May 13 09:05:30 UTC 2024 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to decode obj from .rgw.root:periods.91d2a42c-735b-492a-bcf3-05235ce888aa.3 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 failed reading current period info: (5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to start notify service ((5) Input/output error 2024-05-13T09:05:30.607+0000 7f4e7c4ea500 0 ERROR: failed to init services (ret=(5) Input/output error) couldn't init storage provider",
"cephadm shell --radosgw-admin COMMAND",
"cephadm shell -- radosgw-admin bucket list",
"HEALTH_WARN 24 pgs stale; 3/300 in osds are down",
"ceph health detail HEALTH_WARN 24 pgs stale; 3/300 in osds are down pg 2.5 is stuck stale+active+remapped, last acting [2,0] osd.10 is down since epoch 23, last address 192.168.106.220:6800/11080 osd.11 is down since epoch 13, last address 192.168.106.220:6803/11539 osd.12 is down since epoch 24, last address 192.168.106.220:6806/11861",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"cephadm shell",
"ceph health detail HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"ceph pg deep-scrub ID",
"ceph pg deep-scrub 0.6 instructing pg 0.6 on osd.0 to deep-scrub",
"ceph -w | grep ID",
"ceph -w | grep 0.6 2022-05-26 01:35:36.778215 osd.106 [ERR] 0.6 deep-scrub stat mismatch, got 636/635 objects, 0/0 clones, 0/0 dirty, 0/0 omap, 0/0 hit_set_archive, 0/0 whiteouts, 1855455/1854371 bytes. 2022-05-26 01:35:36.788334 osd.106 [ERR] 0.6 deep-scrub 1 errors",
"PG . ID shard OSD : soid OBJECT missing attr , missing attr _ATTRIBUTE_TYPE PG . ID shard OSD : soid OBJECT digest 0 != known digest DIGEST , size 0 != known size SIZE PG . ID shard OSD : soid OBJECT size 0 != known size SIZE PG . ID deep-scrub stat mismatch, got MISMATCH PG . ID shard OSD : soid OBJECT candidate had a read error, digest 0 != known digest DIGEST",
"PG . ID shard OSD : soid OBJECT digest DIGEST != known digest DIGEST PG . ID shard OSD : soid OBJECT omap_digest DIGEST != known omap_digest DIGEST",
"HEALTH_WARN 197 pgs stuck unclean",
"ceph osd tree",
"HEALTH_WARN 197 pgs stuck inactive",
"ceph osd tree",
"HEALTH_ERR 7 pgs degraded; 12 pgs down; 12 pgs peering; 1 pgs recovering; 6 pgs stuck unclean; 114/3300 degraded (3.455%); 1/3 in osds are down pg 0.5 is down+peering pg 1.4 is down+peering osd.1 is down since epoch 69, last address 192.168.106.220:6801/8651",
"ceph pg ID query",
"ceph pg 0.5 query { \"state\": \"down+peering\", \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Peering\\/GetInfo\", \"enter_time\": \"2021-08-06 14:40:16.169679\", \"requested_info_from\": []}, { \"name\": \"Started\\/Primary\\/Peering\", \"enter_time\": \"2021-08-06 14:40:16.169659\", \"probing_osds\": [ 0, 1], \"blocked\": \"peering is blocked due to down osds\", \"down_osds_we_would_probe\": [ 1], \"peering_blocked_by\": [ { \"osd\": 1, \"current_lost_at\": 0, \"comment\": \"starting or marking this osd lost may let us proceed\"}]}, { \"name\": \"Started\", \"enter_time\": \"2021-08-06 14:40:16.169513\"} ] }",
"HEALTH_WARN 1 pgs degraded; 78/3778 unfound (2.065%)",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 pgs recovering; 1 pgs stuck unclean; recovery 5/937611 objects degraded (0.001%); 1/312537 unfound (0.000%) pg 3.8a5 is stuck unclean for 803946.712780, current state active+recovering, last acting [320,248,0] pg 3.8a5 is active+recovering, acting [320,248,0], 1 unfound recovery 5/937611 objects degraded (0.001%); **1/312537 unfound (0.000%)**",
"ceph pg ID query",
"ceph pg 3.8a5 query { \"state\": \"active+recovering\", \"epoch\": 10741, \"up\": [ 320, 248, 0], \"acting\": [ 320, 248, 0], <snip> \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2021-08-28 19:30:12.058136\", \"might_have_unfound\": [ { \"osd\": \"0\", \"status\": \"already probed\"}, { \"osd\": \"248\", \"status\": \"already probed\"}, { \"osd\": \"301\", \"status\": \"already probed\"}, { \"osd\": \"362\", \"status\": \"already probed\"}, { \"osd\": \"395\", \"status\": \"already probed\"}, { \"osd\": \"429\", \"status\": \"osd is down\"}], \"recovery_progress\": { \"backfill_targets\": [], \"waiting_on_backfill\": [], \"last_backfill_started\": \"0\\/\\/0\\/\\/-1\", \"backfill_info\": { \"begin\": \"0\\/\\/0\\/\\/-1\", \"end\": \"0\\/\\/0\\/\\/-1\", \"objects\": []}, \"peer_backfill_info\": [], \"backfills_in_flight\": [], \"recovering\": [], \"pg_backend\": { \"pull_from_peer\": [], \"pushing\": []}}, \"scrub\": { \"scrubber.epoch_start\": \"0\", \"scrubber.active\": 0, \"scrubber.block_writes\": 0, \"scrubber.finalizing\": 0, \"scrubber.waiting_on\": 0, \"scrubber.waiting_on_whom\": []}}, { \"name\": \"Started\", \"enter_time\": \"2021-08-28 19:30:11.044020\"}],",
"cephadm shell",
"ceph pg dump_stuck inactive ceph pg dump_stuck unclean ceph pg dump_stuck stale",
"rados list-inconsistent-pg POOL --format=json-pretty",
"rados list-inconsistent-pg data --format=json-pretty [0.6]",
"rados list-inconsistent-obj PLACEMENT_GROUP_ID",
"rados list-inconsistent-obj 0.6 { \"epoch\": 14, \"inconsistents\": [ { \"object\": { \"name\": \"image1\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"version\": 1 }, \"errors\": [ \"data_digest_mismatch\", \"size_mismatch\" ], \"union_shard_errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"selected_object_info\": \"0:602f83fe:::foo:head(16'1 client.4110.0:1 dirty|data_digest|omap_digest s 968 uv 1 dd e978e67f od ffffffff alloc_hint [0 0 0])\", \"shards\": [ { \"osd\": 0, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 1, \"errors\": [], \"size\": 968, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xe978e67f\" }, { \"osd\": 2, \"errors\": [ \"data_digest_mismatch_oi\", \"size_mismatch_oi\" ], \"size\": 0, \"omap_digest\": \"0xffffffff\", \"data_digest\": \"0xffffffff\" } ] } ] }",
"rados list-inconsistent-snapset PLACEMENT_GROUP_ID",
"rados list-inconsistent-snapset 0.23 --format=json-pretty { \"epoch\": 64, \"inconsistents\": [ { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000001\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"0x00000002\", \"headless\": true }, { \"name\": \"obj5\", \"nspace\": \"\", \"locator\": \"\", \"snap\": \"head\", \"ss_attr_missing\": true, \"extra_clones\": true, \"extra clones\": [ 2, 1 ] } ]",
"HEALTH_ERR 1 pgs inconsistent; 2 scrub errors pg 0.6 is active+clean+inconsistent, acting [0,1,2] 2 scrub errors",
"_PG_._ID_ shard _OSD_: soid _OBJECT_ digest _DIGEST_ != known digest _DIGEST_ _PG_._ID_ shard _OSD_: soid _OBJECT_ omap_digest _DIGEST_ != known omap_digest _DIGEST_",
"ceph pg repair ID",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'",
"ceph osd set noscrub ceph osd set nodeep-scrub",
"ceph osd pool set POOL pg_num VALUE",
"ceph osd pool set data pg_num 4",
"ceph -s",
"ceph osd pool set POOL pgp_num VALUE",
"ceph osd pool set data pgp_num 4",
"ceph -s",
"ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 3 --osd_recovery_op_priority 3'",
"ceph osd unset noscrub ceph osd unset nodeep-scrub",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op list",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op list",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op list OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op list default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost --dry-run",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID --op fix-lost",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c --op fix-lost",
"ceph-objectstore-tool --data-path PATH_TO_OSD --op fix-lost OBJECT_ID",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --op fix-lost default.region",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-bytes > OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.backup ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-bytes > zone_info.default.working-copy",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-bytes < OBJECT_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-bytes < zone_info.default.working-copy",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT remove",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' remove",
"systemctl status ceph-osd@ OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-omap",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-omap",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omaphdr > zone_info.default.omaphdr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omaphdr < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omaphdr < zone_info.default.omaphdr.txt",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-omap KEY > OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-omap \"\" > zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-omap KEY < OBJECT_MAP_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-omap \"\" < zone_info.default.omap.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-omap KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-omap \"\"",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT list-attrs",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' list-attrs",
"systemctl status ceph- FSID @osd. OSD_ID",
"systemctl status [email protected]",
"cephadm shell --name osd. OSD_ID",
"cephadm shell --name osd.0",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT get-attr KEY > OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' get-attr \"oid\" > zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT set-attr KEY < OBJECT_ATTRS_FILE_NAME",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' set-attr \"oid\"<zone_info.default.attr.txt",
"ceph-objectstore-tool --data-path PATH_TO_OSD --pgid PG_ID OBJECT rm-attr KEY",
"ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 --pgid 0.1c '{\"oid\":\"zone_info.default\",\"key\":\"\",\"snapid\":-2,\"hash\":235010478,\"max\":0,\"pool\":11,\"namespace\":\"\"}' rm-attr \"oid\"",
"ceph orch apply mon --unmanaged Scheduled mon update...",
"ceph -s mon: 5 daemons, quorum host01, host02, host04, host05 (age 30s), out of quorum: host07",
"ceph mon set_new_tiebreaker NEW_HOST",
"ceph mon set_new_tiebreaker host02",
"ceph mon set_new_tiebreaker host02 Error EINVAL: mon.host02 has location DC1, which matches mons host02 on the datacenter dividing bucket for stretch mode.",
"ceph mon set_location HOST datacenter= DATACENTER",
"ceph mon set_location host02 datacenter=DC3",
"ceph orch daemon rm FAILED_TIEBREAKER_MONITOR --force",
"ceph orch daemon rm mon.host07 --force Removed mon.host07 from host 'host07'",
"ceph mon add HOST IP_ADDRESS datacenter= DATACENTER ceph orch daemon add mon HOST",
"ceph mon add host07 213.222.226.50 datacenter=DC1 ceph orch daemon add mon host07",
"ceph -s mon: 5 daemons, quorum host01, host02, host04, host05, host07 (age 15s)",
"ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host02 disallowed_leaders host02 0: [v2:132.224.169.63:3300/0,v1:132.224.169.63:6789/0] mon.host02; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host07; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host03; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph orch apply mon --placement=\" HOST_1 , HOST_2 , HOST_3 , HOST_4 , HOST_5 \"",
"ceph orch apply mon --placement=\"host01, host02, host04, host05, host07\" Scheduled mon update",
"ceph mon add NEW_HOST IP_ADDRESS datacenter= DATACENTER",
"ceph mon add host06 213.222.226.50 datacenter=DC3 adding mon.host06 at [v2:213.222.226.50:3300/0,v1:213.222.226.50:6789/0]",
"ceph orch apply mon --unmanaged Scheduled mon update...",
"ceph orch daemon add mon NEW_HOST",
"ceph orch daemon add mon host06",
"ceph -s mon: 6 daemons, quorum host01, host02, host04, host05, host06 (age 30s), out of quorum: host07",
"ceph mon set_new_tiebreaker NEW_HOST",
"ceph mon set_new_tiebreaker host06",
"ceph orch daemon rm FAILED_TIEBREAKER_MONITOR --force",
"ceph orch daemon rm mon.host07 --force Removed mon.host07 from host 'host07'",
"ceph mon dump epoch 19 fsid 1234ab78-1234-11ed-b1b1-de456ef0a89d last_changed 2023-01-17T04:12:05.709475+0000 created 2023-01-16T05:47:25.631684+0000 min_mon_release 16 (pacific) election_strategy: 3 stretch_mode_enabled 1 tiebreaker_mon host06 disallowed_leaders host06 0: [v2:213.222.226.50:3300/0,v1:213.222.226.50:6789/0] mon.host06; crush_location {datacenter=DC3} 1: [v2:220.141.179.34:3300/0,v1:220.141.179.34:6789/0] mon.host04; crush_location {datacenter=DC2} 2: [v2:40.90.220.224:3300/0,v1:40.90.220.224:6789/0] mon.host01; crush_location {datacenter=DC1} 3: [v2:60.140.141.144:3300/0,v1:60.140.141.144:6789/0] mon.host02; crush_location {datacenter=DC1} 4: [v2:186.184.61.92:3300/0,v1:186.184.61.92:6789/0] mon.host05; crush_location {datacenter=DC2} dumped monmap epoch 19",
"ceph orch apply mon --placement=\" HOST_1 , HOST_2 , HOST_3 , HOST_4 , HOST_5 \"",
"ceph orch apply mon --placement=\"host01, host02, host04, host05, host06\" Scheduled mon update...",
"ceph osd force_recovery_stretch_mode --yes-i-really-mean-it",
"ceph osd force_healthy_stretch_mode --yes-i-really-mean-it",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true",
"ceph config rm osd.X osd_mclock_max_capacity_iops_[hdd|ssd]",
"ceph config set osd.X osd_mclock_force_run_benchmark_on_init true",
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms yum --enable=rhceph-6-tools-for-rhel-9-x86_64-debug-rpms",
"ceph-base-debuginfo ceph-common-debuginfo ceph-debugsource ceph-fuse-debuginfo ceph-immutable-object-cache-debuginfo ceph-mds-debuginfo ceph-mgr-debuginfo ceph-mon-debuginfo ceph-osd-debuginfo ceph-radosgw-debuginfo cephfs-mirror-debuginfo",
"dnf install gdb",
"echo \"| /usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e\" > /proc/sys/kernel/core_pattern",
"ls -ltr /var/lib/systemd/coredump total 8232 -rw-r-----. 1 root root 8427548 Jan 22 19:24 core.ceph-osd.167.5ede29340b6c4fe4845147f847514c12.15622.1584573794000000.xz",
"ps exec -it MONITOR_ID_OR_OSD_ID bash",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-osd-2 bash",
"dnf install procps-ng gdb",
"ps -aef | grep PROCESS | grep -v run",
"ps -aef | grep ceph-mon | grep -v run ceph 15390 15266 0 18:54 ? 00:00:29 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 5 ceph 18110 17985 1 19:40 ? 00:00:08 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 2",
"gcore ID",
"gcore 18110 warning: target file /proc/18110/cmdline contained unexpected null characters Saved corefile core.18110",
"ls -ltr total 709772 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.18110",
"cp ceph-mon- MONITOR_ID :/tmp/mon.core. MONITOR_PID /tmp",
"cephadm shell",
"ceph config set mgr mgr/cephadm/allow_ptrace true",
"ceph orch redeploy SERVICE_ID",
"ceph orch redeploy mgr ceph orch redeploy rgw.rgw.1",
"exit ssh [email protected]",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-rgw-rgw-1-host04 bash",
"dnf install procps-ng gdb",
"ps aux | grep rados ceph 6 0.3 2.8 5334140 109052 ? Sl May10 5:25 /usr/bin/radosgw -n client.rgw.rgw.1.host04 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug",
"gcore PID",
"gcore 6",
"ls -ltr total 108798 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.6",
"cp ceph-mon- DAEMON_ID :/tmp/mon.core. PID /tmp"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/troubleshooting_guide/index
|
Chapter 9. Image configuration resources (Classic)
|
Chapter 9. Image configuration resources (Classic) Use the following procedure to configure image registries. 9.1. Image controller configuration parameters The image.config.openshift.io/cluster resource holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . Its spec offers the following configuration parameters. Note Parameters such as DisableScheduledImport , MaxImagesBulkImportedPerRepository , MaxScheduledImportsPerMinute , ScheduledImageImportMinimumIntervalSeconds , InternalRegistryHostname are not configurable. Parameter Description allowedRegistriesForImport Limits the container image registries from which normal users can import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. Every element of this list contains a location of the registry specified by the registry domain name. domainName : Specifies a domain name for the registry. If the registry uses a non-standard 80 or 443 port, the port should be included in the domain name as well. insecure : Insecure indicates whether the registry is secure or insecure. By default, if not otherwise specified, the registry is assumed to be secure. additionalTrustedCA A reference to a config map containing additional CAs that should be trusted during image stream import , pod image pull , openshift-image-registry pullthrough , and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM-encoded certificate as the value, for each additional registry CA to trust. externalRegistryHostnames Provides the hostnames for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The value must be in hostname[:port] format. registrySources Contains configuration that determines how the container runtime should treat individual registries when accessing images for builds and pods. For instance, whether or not to allow insecure access. It does not contain configuration for the internal cluster registry. insecureRegistries : Registries which do not have a valid TLS certificate or only support HTTP connections. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . blockedRegistries : Registries for which image pull and push actions are denied. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are allowed. allowedRegistries : Registries for which image pull and push actions are allowed. To specify all subdomains, add the asterisk ( * ) wildcard character as a prefix to the domain name. For example, *.example.com . You can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . All other registries are blocked. 
containerRuntimeSearchRegistries : Registries for which image pull and push actions are allowed using image short names. All other registries are blocked. Either blockedRegistries or allowedRegistries can be set, but not both. Warning When the allowedRegistries parameter is defined, all registries, including registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. When using the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The status field of the image.config.openshift.io/cluster resource holds observed values from the cluster. Parameter Description internalRegistryHostname Set by the Image Registry Operator, which controls the internalRegistryHostname . It sets the hostname for the default OpenShift image registry. The value must be in hostname[:port] format. For backward compatibility, you can still use the OPENSHIFT_DEFAULT_REGISTRY environment variable, but this setting overrides the environment variable. externalRegistryHostnames Set by the Image Registry Operator, provides the external hostnames for the image registry when it is exposed externally. The first value is used in publicDockerImageRepository field in image streams. The values must be in hostname[:port] format. 9.2. Configuring image registry settings You can configure image registry settings by editing the image.config.openshift.io/cluster custom resource (CR). When changes to the registry are applied to the image.config.openshift.io/cluster CR, the Machine Config Operator (MCO) performs the following sequential actions: Cordons the node Applies changes by restarting CRI-O Uncordons the node Note The MCO does not restart nodes when it detects changes. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Image : Holds cluster-wide information about how to handle images. The canonical, and only valid name is cluster . 2 allowedRegistriesForImport : Limits the container image registries from which normal users may import images. Set this list to the registries that you trust to contain valid images, and that you want applications to be able to import from. Users with permission to create images or ImageStreamMappings from the API are not affected by this policy. Typically only cluster administrators have the appropriate permissions. 
3 additionalTrustedCA : A reference to a config map containing additional certificate authorities (CA) that are trusted during image stream import, pod image pull, openshift-image-registry pullthrough, and builds. The namespace for this config map is openshift-config . The format of the config map is to use the registry hostname as the key, and the PEM certificate as the value, for each additional registry CA to trust. 4 registrySources : Contains configuration that determines whether the container runtime allows or blocks individual registries when accessing images for builds and pods. Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. You can also define whether or not to allow access to insecure registries or registries that allow registries that use image short names. This example uses the allowedRegistries parameter, which defines the registries that are allowed to be used. The insecure registry insecure.com is also allowed. The registrySources parameter does not contain configuration for the internal cluster registry. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, you must add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. Do not add the registry.redhat.io and quay.io registries to the blockedRegistries list. When using the allowedRegistries , blockedRegistries , or insecureRegistries parameter, you can specify an individual repository within a registry. For example: reg1.io/myrepo/myapp:latest . Insecure external registries should be avoided to reduce possible security risks. To check that the changes are applied, list your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.30.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.30.3 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.30.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.30.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.30.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.30.3 9.2.1. Adding specific registries You can add a list of registries, and optionally an individual repository within a registry, that are permitted for image pull and push actions by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the allowedRegistries parameter, the container runtime searches only those registries. Registries not in the list are blocked. Warning When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. 
If you use the parameter, to prevent pod failure, add the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an allowed list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, to use for image pull and push actions. All other registries are blocked. Note Either the allowedRegistries parameter or the blockedRegistries parameter can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, the allowed registries list is used to update the image signature policy in the /etc/containers/policy.json file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/policy.json | jq '.' The following policy indicates that only images from the example.com, quay.io, and registry.redhat.io registries are permitted for image pulls and pushes: Example 9.1. 
Example image signature policy file { "default":[ { "type":"reject" } ], "transports":{ "atomic":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker":{ "example.com":[ { "type":"insecureAcceptAnything" } ], "image-registry.openshift-image-registry.svc:5000":[ { "type":"insecureAcceptAnything" } ], "insecure.com":[ { "type":"insecureAcceptAnything" } ], "quay.io":[ { "type":"insecureAcceptAnything" } ], "reg4.io/myrepo/myapp:latest":[ { "type":"insecureAcceptAnything" } ], "registry.redhat.io":[ { "type":"insecureAcceptAnything" } ] }, "docker-daemon":{ "":[ { "type":"insecureAcceptAnything" } ] } } } Note If your cluster uses the registrySources.insecureRegistries parameter, ensure that any insecure registries are included in the allowed list. For example: spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000 9.2.2. Blocking specific registries You can block any registry, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. If you created a list of registries under the blockedRegistries parameter, the container runtime does not search those registries. All other registries are allowed. Warning To prevent pod failure, do not add the registry.redhat.io and quay.io registries to the blockedRegistries list, as they are required by payload images within your environment. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with a blocked list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify registries, and optionally a repository in that registry, that should not be used for image pull and push actions. All other registries are allowed. Note Either the blockedRegistries registry or the allowedRegistries registry can be set, but not both. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. 
After the nodes return to the Ready state, changes to the blocked registries appear in the /etc/containers/registries.conf file on each node. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat etc/containers/registries.conf The following example indicates that images from the untrusted.com registry are prevented for image pulls and pushes: Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "untrusted.com" blocked = true 9.2.2.1. Blocking a payload registry In a mirroring configuration, you can block upstream payload registries in a disconnected environment using a ImageContentSourcePolicy (ICSP) object. The following example procedure demonstrates how to block the quay.io/openshift-payload payload registry. Procedure Create the mirror configuration using an ImageContentSourcePolicy (ICSP) object to mirror the payload to a registry in your instance. The following example ICSP file mirrors the payload internal-mirror.io/openshift-payload : apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload After the object deploys onto your nodes, verify that the mirror configuration is set by checking the /etc/containers/registries.conf file: Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" Use the following command to edit the image.config.openshift.io custom resource file: USD oc edit image.config.openshift.io cluster To block the payload registry, add the following configuration to the image.config.openshift.io custom resource file: spec: registrySources: blockedRegistries: - quay.io/openshift-payload Verification Verify that the upstream payload registry is blocked by checking the /etc/containers/registries.conf file on the node. Example output [[registry]] prefix = "" location = "quay.io/openshift-payload" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = "internal-mirror.io/openshift-payload" 9.2.3. Allowing insecure registries You can add insecure registries, and optionally an individual repository within a registry, by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. Registries that do not use valid SSL certificates or do not require HTTPS connections are considered insecure. Warning Insecure external registries should be avoided to reduce possible security risks. 
Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR with an insecure registries list: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Contains configurations that determine how the container runtime should treat individual registries when accessing images for builds and pods. It does not contain configuration for the internal cluster registry. 2 Specify an insecure registry. You can specify a repository in that registry. 3 Ensure that any insecure registries are included in the allowedRegistries list. Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use the parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster CR for any changes to the registries, then drains and uncordons the nodes when it detects changes. After the nodes return to the Ready state, changes to the insecure and blocked registries appear in the /etc/containers/registries.conf file on each node. Verification To check that the registries have been added to the policy file, use the following command on a node: USD cat /etc/containers/registries.conf The following example indicates that images from the insecure.com registry is insecure and is allowed for image pulls and pushes. Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "insecure.com" insecure = true 9.2.4. Adding registries that allow image short names You can add registries to search for an image short name by editing the image.config.openshift.io/cluster custom resource (CR). OpenShift Container Platform applies the changes to this CR to all nodes in the cluster. An image short name enables you to search for images without including the fully qualified domain name in the pull spec. For example, you could use rhel7/etcd instead of registry.access.redhat.com/rhe7/etcd . You might use short names in situations where using the full path is not practical. For example, if your cluster references multiple internal registries whose DNS changes frequently, you would need to update the fully qualified domain names in your pull specs with each change. In this case, using an image short name might be beneficial. When pulling or pushing images, the container runtime searches the registries listed under the registrySources parameter in the image.config.openshift.io/cluster CR. 
If you created a list of registries under the containerRuntimeSearchRegistries parameter, when pulling an image with a short name, the container runtime searches those registries. Warning Using image short names with public registries is strongly discouraged because the image might not deploy if the public registry requires authentication. Use fully-qualified image names with public registries. Red Hat internal or private registries typically support the use of image short names. If you list public registries under the containerRuntimeSearchRegistries parameter (including the registry.redhat.io , docker.io , and quay.io registries), you expose your credentials to all the registries on the list, and you risk network and registry attacks. Because you can only have one pull secret for pulling images, as defined by the global pull secret, that secret is used to authenticate against every registry in that list. Therefore, if you include public registries in the list, you introduce a security risk. You cannot list multiple public registries under the containerRuntimeSearchRegistries parameter if each public registry requires different credentials and a cluster does not list the public registry in the global pull secret. For a public registry that requires authentication, you can use an image short name only if the registry has its credentials stored in the global pull secret. The Machine Config Operator (MCO) watches the image.config.openshift.io/cluster resource for any changes to the registries. When the MCO detects a change, it drains the nodes, applies the change, and uncordons the nodes. After the nodes return to the Ready state, if the containerRuntimeSearchRegistries parameter is added, the MCO creates a file in the /etc/containers/registries.conf.d directory on each node with the listed registries. The file overrides the default list of unqualified search registries in the /etc/containers/registries.conf file. There is no way to fall back to the default list of unqualified search registries. The containerRuntimeSearchRegistries parameter works only with the Podman and CRI-O container engines. The registries in the list can be used only in pod specs, not in builds and image streams. Procedure Edit the image.config.openshift.io/cluster custom resource: USD oc edit image.config.openshift.io/cluster The following is an example image.config.openshift.io/cluster CR: apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 1 name: cluster resourceVersion: "8302" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 ... status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000 1 Specify registries to use with image short names. You should use image short names with only internal or private registries to reduce possible security risks. 2 Ensure that any registries listed under containerRuntimeSearchRegistries are included in the allowedRegistries list. 
Note When the allowedRegistries parameter is defined, all registries, including the registry.redhat.io and quay.io registries and the default OpenShift image registry, are blocked unless explicitly listed. If you use this parameter, to prevent pod failure, add all registries including the registry.redhat.io and quay.io registries and the internalRegistryHostname to the allowedRegistries list, as they are required by payload images within your environment. For disconnected clusters, mirror registries should also be added. Verification Enter the following command to obtain a list of your nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b Run the following command to enter debug mode on the node: USD oc debug node/<node_name> When prompted, enter chroot /host into the terminal: sh-4.4# chroot /host Enter the following command to check that the registries have been added to the policy file: sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf Example output unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io'] 9.2.5. Configuring additional trust stores for image registry access The image.config.openshift.io/cluster custom resource can contain a reference to a config map that contains additional certificate authorities to be trusted during image registry access. Prerequisites The certificate authorities (CA) must be PEM-encoded. Procedure You can create a config map in the openshift-config namespace and use its name in AdditionalTrustedCA in the image.config.openshift.io custom resource to provide additional CAs that should be trusted when contacting external registries. The config map key is the hostname of a registry with the port for which this CA is to be trusted, and the PEM certificate content is the value, for each additional registry CA to trust. Image registry CA config map example apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . You can configure additional CAs with the following procedure. To configure an additional CA: USD oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config USD oc edit image.config.openshift.io cluster spec: additionalTrustedCA: name: registry-config 9.3. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. 
The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a data center that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. The ICSP CR always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CRs objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 
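For reference, a minimal ImageTagMirrorSet object has the same shape as the ImageDigestMirrorSet example shown in the following procedure, with imageTagMirrors in place of imageDigestMirrors. The registry and repository names below are placeholders rather than values from your cluster:

apiVersion: config.openshift.io/v1
kind: ImageTagMirrorSet
metadata:
  name: tag-mirror-example
spec:
  imageTagMirrors:
    - mirrors:
        - mirror.example.com/redhat
      source: registry.example.com/redhat
      mirrorSourcePolicy: AllowContactingSource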
Additional resources For more information about global pull secrets, see Updating the global cluster pull secret . 9.3.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy --all \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use the secondary mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in an image pull specification. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. 
This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. 
Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/redhat" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 9.3.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. 
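As a sketch of what the conversion produces, the mirror-ocp ImageContentSourcePolicy example shown earlier would be rewritten along the following lines. The generated file name and any metadata that the command adds can differ in your output:

apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: mirror-ocp
spec:
  imageDigestMirrors:
    - mirrors:
        - mirror.registry.com:443/ocp/release
      source: quay.io/openshift-release-dev/ocp-release
    - mirrors:
        - mirror.registry.com:443/ocp/release
      source: quay.io/openshift-release-dev/ocp-v4.0-art-dev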
For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in the preceding section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml files and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out.
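For example, you can list the remaining ImageContentSourcePolicy objects and delete them by name once the corresponding ImageDigestMirrorSet objects are in place; the object name is a placeholder:

oc get imagecontentsourcepolicy

oc delete imagecontentsourcepolicy <icsp_name>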
|
[
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.30.3 ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.30.3 ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.30.3 ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.30.3 ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.30.3 ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.30.3",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/policy.json | jq '.'",
"{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }",
"spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: my-icsp spec: repositoryDigestMirrors: - mirrors: - internal-mirror.io/openshift-payload source: quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io cluster",
"spec: registrySources: blockedRegistries: - quay.io/openshift-payload",
"[[registry]] prefix = \"\" location = \"quay.io/openshift-payload\" blocked = true mirror-by-digest-only = true [[registry.mirror]] location = \"internal-mirror.io/openshift-payload\"",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION <node_name> Ready control-plane,master 37m v1.27.8+4fab27b",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-5.1# cat /etc/containers/registries.conf.d/01-image-searchRegistries.conf",
"unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal",
"apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf",
"oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>",
"oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files",
"wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml",
"oc create -f <path_to_the_directory>/<file-name>.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/images/image-configuration
|
Chapter 2. Installation
|
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.4 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 subscriptions listed at https://access.redhat.com/solutions/472793 . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is also available in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional channel, which are listed in Section 2.1.2, "Packages from the Optional Channel" , cannot be installed from the ISO image. Note Packages that require the Optional channel cannot be installed from the ISO image. A list of packages that require enabling of the Optional channel is provided in Section 2.1.2, "Packages from the Optional Channel" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager .
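For example, on a Red Hat Enterprise Linux 7 Server system, enabling the Red Hat Software Collections repository could look as follows; substitute the repository name that matches your system variant and Red Hat Enterprise Linux version:

subscription-manager repos --enable rhel-server-rhscl-7-rpms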
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Channel Some of the Red Hat Software Collections packages require the Optional channel to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this channel, see the relevant Knowledgebase article at https://access.redhat.com/solutions/392003 . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional channel to be enabled are listed in the tables below. Note that packages from the Optional channel are unsupported. For details, see the Knowledgebase article at https://access.redhat.com/articles/1150793 . Table 2.1. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Channel devtoolset-8-build scl-utils-build devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-gcc-plugin-devel libmpc-devel devtoolset-9-build scl-utils-build devtoolset-9-dyninst-testsuite glibc-static devtoolset-9-gcc-plugin-devel libmpc-devel devtoolset-9-gdb source-highlight httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl python27-python-debug tix python27-python-devel scl-utils-build python27-tkinter tix rh-git218-git-cvs cvsps rh-git218-git-svn perl-Git-SVN, subversion rh-git218-perl-Git-SVN subversion-perl rh-java-common-ant-apache-bsf rhino rh-java-common-batik rhino rh-maven35-xpp3-javadoc java-1.7.0-openjdk-javadoc, java-1.8.0-openjdk-javadoc, java-1.8.0-openjdk-javadoc-zip, java-11-openjdk-javadoc, java-11-openjdk-javadoc-zip rh-php72-php-pspell aspell rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python36-python-devel scl-utils-build rh-python36-python-sphinx texlive-framed, texlive-threeparttable, texlive-titlesec, texlive-wrapfig Table 2.2. Packages That Require Enabling of the Optional Channel in Red Hat Enterprise Linux 6 Package from a Software Collection Required Package from the Optional Channel devtoolset-8-dyninst-testsuite glibc-static devtoolset-8-elfutils-devel xz-devel devtoolset-8-gcc-plugin-devel gmp-devel, mpfr-devel devtoolset-8-libatomic-devel libatomic devtoolset-8-libgccjit mpfr python27-python-devel scl-utils-build rh-mariadb102-boost-devel libicu-devel rh-mariadb102-mariadb-bench perl-GD rh-mongodb34-boost-devel libicu-devel rh-perl524-perl-devel gdbm-devel, systemtap-sdt-devel rh-python36-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.4 requires the removal of any earlier pre-release versions, including Beta releases. 
If you have installed any pre-release version of Red Hat Software Collections 3.4, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections 3.4 Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php72 and rh-mariadb102 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl526-perl-CPAN and rh-perl526-perl-Archive-Tar packages, type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby25-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see https://access.redhat.com/solutions/9907 . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default.
If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
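As an illustration of the rpmbuild invocation mentioned above, rebuilding a single package for a collection could be started as in the following sketch, where the collection name and spec file are hypothetical:

rpmbuild --define 'scl rh-mycollection1' -ba mypackage.spec

The --define option sets the scl macro so that the resulting packages are built for the named collection instead of the base system.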
|
[
"rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms>",
"~]# yum install rh-php72 rh-mariadb102",
"~]# yum install rh-perl526-perl-CPAN rh-perl526-perl-Archive-Tar",
"~]# debuginfo-install rh-ruby25-ruby"
] |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-Installation
|
Chapter 2. Customizing and managing Red Hat Ceph Storage
|
Chapter 2. Customizing and managing Red Hat Ceph Storage Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 supports Red Hat Ceph Storage 7. For information on the customization and management of Red Hat Ceph Storage 7, refer to the Red Hat Ceph Storage documentation . The following guides contain key information and procedures for these tasks: Administration Guide Configuration Guide Operations Guide Data Security and Hardening Guide Dashboard Guide Troubleshooting Guide
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_persistent_storage/con_ceph-customization-management_osp
|
Chapter 10. The OptaPlanner Score interface
|
Chapter 10. The OptaPlanner Score interface A score is represented by the Score interface, which extends the Comparable interface: public interface Score<...> extends Comparable<...> { ... } The score implementation to use depends on your use case. Your score might not efficiently fit in a single long value. OptaPlanner has several built-in score implementations, but you can implement a custom score as well. Most use cases use the built-in HardSoftScore score. All Score implementations also have an initScore (which is an int ). It is mostly intended for internal use in OptaPlanner: it is the negative number of uninitialized planning variables. From a user's perspective, this is 0 , unless a construction heuristic is terminated before it could initialize all planning variables. In this case, Score.isSolutionInitialized() returns false . The score implementation (for example HardSoftScore ) must be the same throughout a solver runtime. The score implementation is configured in the solution domain class: @PlanningSolution public class CloudBalance { ... @PlanningScore private HardSoftScore score; } 10.1. Floating point numbers in score calculation Avoid the use of the floating point number types float or double in score calculation. Use BigDecimal or scaled long instead. Floating point numbers cannot represent a decimal number correctly. For example, a double cannot contain the value 0.05 correctly. Instead, it contains the nearest representable value. Arithmetic, including addition and subtraction, that uses floating point numbers, especially for planning problems, leads to incorrect decisions as shown in the following illustration: Additionally, floating point number addition is not associative: System.out.println( ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03)) ); // returns false This leads to score corruption . Decimal numbers ( BigDecimal ) have none of these problems. Note BigDecimal arithmetic is considerably slower than int , long , or double arithmetic. In some experiments, the score calculation takes five times longer. Therefore, in many cases, it can be worthwhile to multiply all numbers for a single score weight by a plural of ten, so the score weight fits in a scaled int or long . For example, if you multiply all weights by 1000 , a fuelCost of 0.07 becomes a fuelCostMillis of 70 and no longer uses a decimal score weight. 10.2. Score calculation types There are several types of ways to calculate the score of a solution: Easy Java score calculation : Implement all constraints together in a single method in Java or another JVM language. This method does not scale. Constraint streams score calculation : Implement each constraint as a separate constraint stream in Java or another JVM language. This method is fast and scalable. Incremental Java score calculation (not recommended): Implement multiple low-level methods in Java or another JVM language. This method is fast and scalable but very difficult to implement and maintain. Drools score calculation (deprecated) : Implement each constraint as a separate score rule in DRL. This method is scalable. Each score calculation type can work with any score definition, for example HardSoftScore or HardMediumSoftScore . All score calculation types are object oriented and can reuse existing Java code. Important The score calculation must be read-only. It must not change the planning entities or the problem facts in any way. For example, the score calculation must not call a setter method on a planning entity in the score calculation. 
OptaPlanner does not recalculate the score of a solution if it can predict it unless an environmentMode assertion is enabled. For example, after a winning step is done, there is no need to calculate the score because that move was done and undone earlier. As a result, there is no guarantee that changes applied during score calculation actually happen. To update planning entities when the planning variable changes, use shadow variables instead. 10.2.1. Implenting the Easy Java score calculation type The Easy Java score calculation type provides an easy way to implement your score calculation in Java. You can implement all constraints together in a single method in Java or another JVM language. Advantages: Uses plain old Java so there is no learning curve Provides an opportunity to delegate score calculation to an existing code base or legacy system Disadvantages: Slowest calculation type Does not scale because there is no incremental score calculation Procedure Implement the EasyScoreCalculator interface: public interface EasyScoreCalculator<Solution_, Score_ extends Score<Score_>> { Score_ calculateScore(Solution_ solution); } The following example implements this interface in the N Queens problem: public class NQueensEasyScoreCalculator implements EasyScoreCalculator<NQueens, SimpleScore> { @Override public SimpleScore calculateScore(NQueens nQueens) { int n = nQueens.getN(); List<Queen> queenList = nQueens.getQueenList(); int score = 0; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { Queen leftQueen = queenList.get(i); Queen rightQueen = queenList.get(j); if (leftQueen.getRow() != null && rightQueen.getRow() != null) { if (leftQueen.getRowIndex() == rightQueen.getRowIndex()) { score--; } if (leftQueen.getAscendingDiagonalIndex() == rightQueen.getAscendingDiagonalIndex()) { score--; } if (leftQueen.getDescendingDiagonalIndex() == rightQueen.getDescendingDiagonalIndex()) { score--; } } } } return SimpleScore.valueOf(score); } } Configure the EasyScoreCalculator class in the solver configuration. The following example shows how to implement this interface in the N Queens problem: <scoreDirectorFactory> <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass> </scoreDirectorFactory> To configure values of the EasyScoreCalculator method dynamically in the solver configuration so that the benchmarker can tweak those parameters, add the easyScoreCalculatorCustomProperties element and use custom properties: <scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name="myCacheSize" value="1000" /> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory> 10.2.2. Implementing the Incremental Java score calculation type The Incremental Java score calculation type provides a way to implement your score calculation incrementally in Java. Note This type is not recommended. Advantages: Very fast and scalable. This is currently the fastest type if implemented correctly. Disadvantages: Hard to write. A scalable implementation that heavily uses maps, indexes, and so forth. You have to learn, design, write, and improve all of these performance optimizations yourself. Hard to read. Regular score constraint changes can lead to a high maintenance cost. 
Procedure Implement all of the methods of the IncrementalScoreCalculator interface: public interface IncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> { void resetWorkingSolution(Solution_ workingSolution); void beforeEntityAdded(Object entity); void afterEntityAdded(Object entity); void beforeVariableChanged(Object entity, String variableName); void afterVariableChanged(Object entity, String variableName); void beforeEntityRemoved(Object entity); void afterEntityRemoved(Object entity); Score_ calculateScore(); } The following example implements this interface in the N Queens problem: public class NQueensAdvancedIncrementalScoreCalculator implements IncrementalScoreCalculator<NQueens, SimpleScore> { private Map<Integer, List<Queen>> rowIndexMap; private Map<Integer, List<Queen>> ascendingDiagonalIndexMap; private Map<Integer, List<Queen>> descendingDiagonalIndexMap; private int score; public void resetWorkingSolution(NQueens nQueens) { int n = nQueens.getN(); rowIndexMap = new HashMap<Integer, List<Queen>>(n); ascendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2); descendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2); for (int i = 0; i < n; i++) { rowIndexMap.put(i, new ArrayList<Queen>(n)); ascendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n)); descendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n)); if (i != 0) { ascendingDiagonalIndexMap.put(n - 1 + i, new ArrayList<Queen>(n)); descendingDiagonalIndexMap.put((-i), new ArrayList<Queen>(n)); } } score = 0; for (Queen queen : nQueens.getQueenList()) { insert(queen); } } public void beforeEntityAdded(Object entity) { // Do nothing } public void afterEntityAdded(Object entity) { insert((Queen) entity); } public void beforeVariableChanged(Object entity, String variableName) { retract((Queen) entity); } public void afterVariableChanged(Object entity, String variableName) { insert((Queen) entity); } public void beforeEntityRemoved(Object entity) { retract((Queen) entity); } public void afterEntityRemoved(Object entity) { // Do nothing } private void insert(Queen queen) { Row row = queen.getRow(); if (row != null) { int rowIndex = queen.getRowIndex(); List<Queen> rowIndexList = rowIndexMap.get(rowIndex); score -= rowIndexList.size(); rowIndexList.add(queen); List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex()); score -= ascendingDiagonalIndexList.size(); ascendingDiagonalIndexList.add(queen); List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex()); score -= descendingDiagonalIndexList.size(); descendingDiagonalIndexList.add(queen); } } private void retract(Queen queen) { Row row = queen.getRow(); if (row != null) { List<Queen> rowIndexList = rowIndexMap.get(queen.getRowIndex()); rowIndexList.remove(queen); score += rowIndexList.size(); List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex()); ascendingDiagonalIndexList.remove(queen); score += ascendingDiagonalIndexList.size(); List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex()); descendingDiagonalIndexList.remove(queen); score += descendingDiagonalIndexList.size(); } } public SimpleScore calculateScore() { return SimpleScore.valueOf(score); } } Configure the incrementalScoreCalculatorClass class in the solver configuration. 
The following example shows how to implement this interface in the N Queens problem: <scoreDirectorFactory> <incrementalScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensAdvancedIncrementalScoreCalculator</incrementalScoreCalculatorClass> </scoreDirectorFactory> Important A piece of incremental score calculator code can be difficult to write and to review. Assert its correctness by using an EasyScoreCalculator to fulfill the assertions triggered by the environmentMode . To configure values of an IncrementalScoreCalculator dynamically in the solver configuration so the benchmarker can tweak those parameters, add the incrementalScoreCalculatorCustomProperties element and use custom properties: <scoreDirectorFactory> <incrementalScoreCalculatorClass>...MyIncrementalScoreCalculator</incrementalScoreCalculatorClass> <incrementalScoreCalculatorCustomProperties> <property name="myCacheSize" value="1000"/> </incrementalScoreCalculatorCustomProperties> </scoreDirectorFactory> Optional: Implement the ConstraintMatchAwareIncrementalScoreCalculator interface to facilitate the following goals: Explain a score by splitting it up for each score constraint with ScoreExplanation.getConstraintMatchTotalMap() . Visualize or sort planning entities by how many constraints each one breaks with ScoreExplanation.getIndictmentMap() . Receive a detailed analysis if the IncrementalScoreCalculator is corrupted in FAST_ASSERT or FULL_ASSERT environmentMode . public interface ConstraintMatchAwareIncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> { void resetWorkingSolution(Solution_ workingSolution, boolean constraintMatchEnabled); Collection<ConstraintMatchTotal<Score_>> getConstraintMatchTotals(); Map<Object, Indictment<Score_>> getIndictmentMap(); } For example, in machine reassignment create one ConstraintMatchTotal for each constraint type and call addConstraintMatch() for each constraint match: public class MachineReassignmentIncrementalScoreCalculator implements ConstraintMatchAwareIncrementalScoreCalculator<MachineReassignment, HardSoftLongScore> { ... @Override public void resetWorkingSolution(MachineReassignment workingSolution, boolean constraintMatchEnabled) { resetWorkingSolution(workingSolution); // ignore constraintMatchEnabled, it is always presumed enabled } @Override public Collection<ConstraintMatchTotal<HardSoftLongScore>> getConstraintMatchTotals() { ConstraintMatchTotal<HardSoftLongScore> maximumCapacityMatchTotal = new DefaultConstraintMatchTotal<>(CONSTRAINT_PACKAGE, "maximumCapacity", HardSoftLongScore.ZERO); ... for (MrMachineScorePart machineScorePart : machineScorePartMap.values()) { for (MrMachineCapacityScorePart machineCapacityScorePart : machineScorePart.machineCapacityScorePartList) { if (machineCapacityScorePart.maximumAvailable < 0L) { maximumCapacityMatchTotal.addConstraintMatch( Arrays.asList(machineCapacityScorePart.machineCapacity), HardSoftLongScore.valueOf(machineCapacityScorePart.maximumAvailable, 0)); } } } ... List<ConstraintMatchTotal<HardSoftLongScore>> constraintMatchTotalList = new ArrayList<>(4); constraintMatchTotalList.add(maximumCapacityMatchTotal); ... return constraintMatchTotalList; } @Override public Map<Object, Indictment<HardSoftLongScore>> getIndictmentMap() { return null; // Calculate it non-incrementally from getConstraintMatchTotals() } } The getConstraintMatchTotals() code often duplicates some of the logic of the normal IncrementalScoreCalculator methods. 
Constraint Streams and Drools Score Calculation do not have this disadvantage because they are constraint-match aware automatically when needed without any extra domain-specific code.
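For comparison with the calculators above, the following is a minimal sketch of a single N Queens constraint written with the constraint streams approach recommended in this chapter. It assumes the Queen domain class and the SimpleScore type used in the preceding examples; only the row conflict constraint is shown, and the class and constraint names are illustrative:

import org.optaplanner.core.api.score.buildin.simple.SimpleScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;
import org.optaplanner.core.api.score.stream.Joiners;

public class NQueensConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory constraintFactory) {
        return new Constraint[] {
                // Penalize every distinct pair of queens that share a row,
                // mirroring the rowIndex comparison in the calculators above.
                constraintFactory.forEachUniquePair(Queen.class,
                        Joiners.equal(Queen::getRowIndex))
                        .penalize("Row conflict", SimpleScore.ONE)
        };
    }
}

A constraint provider such as this is wired into the solver with a constraintProviderClass element in the scoreDirectorFactory configuration, in the same way the easyScoreCalculatorClass and incrementalScoreCalculatorClass elements are configured above.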
|
[
"public interface Score<...> extends Comparable<...> { }",
"@PlanningSolution public class CloudBalance { @PlanningScore private HardSoftScore score; }",
"System.out.println( ((0.01 + 0.02) + 0.03) == (0.01 + (0.02 + 0.03)) ); // returns false",
"public interface EasyScoreCalculator<Solution_, Score_ extends Score<Score_>> { Score_ calculateScore(Solution_ solution); }",
"public class NQueensEasyScoreCalculator implements EasyScoreCalculator<NQueens, SimpleScore> { @Override public SimpleScore calculateScore(NQueens nQueens) { int n = nQueens.getN(); List<Queen> queenList = nQueens.getQueenList(); int score = 0; for (int i = 0; i < n; i++) { for (int j = i + 1; j < n; j++) { Queen leftQueen = queenList.get(i); Queen rightQueen = queenList.get(j); if (leftQueen.getRow() != null && rightQueen.getRow() != null) { if (leftQueen.getRowIndex() == rightQueen.getRowIndex()) { score--; } if (leftQueen.getAscendingDiagonalIndex() == rightQueen.getAscendingDiagonalIndex()) { score--; } if (leftQueen.getDescendingDiagonalIndex() == rightQueen.getDescendingDiagonalIndex()) { score--; } } } } return SimpleScore.valueOf(score); } }",
"<scoreDirectorFactory> <easyScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass> </scoreDirectorFactory>",
"<scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name=\"myCacheSize\" value=\"1000\" /> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory>",
"public interface IncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> { void resetWorkingSolution(Solution_ workingSolution); void beforeEntityAdded(Object entity); void afterEntityAdded(Object entity); void beforeVariableChanged(Object entity, String variableName); void afterVariableChanged(Object entity, String variableName); void beforeEntityRemoved(Object entity); void afterEntityRemoved(Object entity); Score_ calculateScore(); }",
"public class NQueensAdvancedIncrementalScoreCalculator implements IncrementalScoreCalculator<NQueens, SimpleScore> { private Map<Integer, List<Queen>> rowIndexMap; private Map<Integer, List<Queen>> ascendingDiagonalIndexMap; private Map<Integer, List<Queen>> descendingDiagonalIndexMap; private int score; public void resetWorkingSolution(NQueens nQueens) { int n = nQueens.getN(); rowIndexMap = new HashMap<Integer, List<Queen>>(n); ascendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2); descendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2); for (int i = 0; i < n; i++) { rowIndexMap.put(i, new ArrayList<Queen>(n)); ascendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n)); descendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n)); if (i != 0) { ascendingDiagonalIndexMap.put(n - 1 + i, new ArrayList<Queen>(n)); descendingDiagonalIndexMap.put((-i), new ArrayList<Queen>(n)); } } score = 0; for (Queen queen : nQueens.getQueenList()) { insert(queen); } } public void beforeEntityAdded(Object entity) { // Do nothing } public void afterEntityAdded(Object entity) { insert((Queen) entity); } public void beforeVariableChanged(Object entity, String variableName) { retract((Queen) entity); } public void afterVariableChanged(Object entity, String variableName) { insert((Queen) entity); } public void beforeEntityRemoved(Object entity) { retract((Queen) entity); } public void afterEntityRemoved(Object entity) { // Do nothing } private void insert(Queen queen) { Row row = queen.getRow(); if (row != null) { int rowIndex = queen.getRowIndex(); List<Queen> rowIndexList = rowIndexMap.get(rowIndex); score -= rowIndexList.size(); rowIndexList.add(queen); List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex()); score -= ascendingDiagonalIndexList.size(); ascendingDiagonalIndexList.add(queen); List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex()); score -= descendingDiagonalIndexList.size(); descendingDiagonalIndexList.add(queen); } } private void retract(Queen queen) { Row row = queen.getRow(); if (row != null) { List<Queen> rowIndexList = rowIndexMap.get(queen.getRowIndex()); rowIndexList.remove(queen); score += rowIndexList.size(); List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex()); ascendingDiagonalIndexList.remove(queen); score += ascendingDiagonalIndexList.size(); List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex()); descendingDiagonalIndexList.remove(queen); score += descendingDiagonalIndexList.size(); } } public SimpleScore calculateScore() { return SimpleScore.valueOf(score); } }",
"<scoreDirectorFactory> <incrementalScoreCalculatorClass>org.optaplanner.examples.nqueens.optional.score.NQueensAdvancedIncrementalScoreCalculator</incrementalScoreCalculatorClass> </scoreDirectorFactory>",
"<scoreDirectorFactory> <incrementalScoreCalculatorClass>...MyIncrementalScoreCalculator</incrementalScoreCalculatorClass> <incrementalScoreCalculatorCustomProperties> <property name=\"myCacheSize\" value=\"1000\"/> </incrementalScoreCalculatorCustomProperties> </scoreDirectorFactory>",
"public interface ConstraintMatchAwareIncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> { void resetWorkingSolution(Solution_ workingSolution, boolean constraintMatchEnabled); Collection<ConstraintMatchTotal<Score_>> getConstraintMatchTotals(); Map<Object, Indictment<Score_>> getIndictmentMap(); }",
"public class MachineReassignmentIncrementalScoreCalculator implements ConstraintMatchAwareIncrementalScoreCalculator<MachineReassignment, HardSoftLongScore> { @Override public void resetWorkingSolution(MachineReassignment workingSolution, boolean constraintMatchEnabled) { resetWorkingSolution(workingSolution); // ignore constraintMatchEnabled, it is always presumed enabled } @Override public Collection<ConstraintMatchTotal<HardSoftLongScore>> getConstraintMatchTotals() { ConstraintMatchTotal<HardSoftLongScore> maximumCapacityMatchTotal = new DefaultConstraintMatchTotal<>(CONSTRAINT_PACKAGE, \"maximumCapacity\", HardSoftLongScore.ZERO); for (MrMachineScorePart machineScorePart : machineScorePartMap.values()) { for (MrMachineCapacityScorePart machineCapacityScorePart : machineScorePart.machineCapacityScorePartList) { if (machineCapacityScorePart.maximumAvailable < 0L) { maximumCapacityMatchTotal.addConstraintMatch( Arrays.asList(machineCapacityScorePart.machineCapacity), HardSoftLongScore.valueOf(machineCapacityScorePart.maximumAvailable, 0)); } } } List<ConstraintMatchTotal<HardSoftLongScore>> constraintMatchTotalList = new ArrayList<>(4); constraintMatchTotalList.add(maximumCapacityMatchTotal); return constraintMatchTotalList; } @Override public Map<Object, Indictment<HardSoftLongScore>> getIndictmentMap() { return null; // Calculate it non-incrementally from getConstraintMatchTotals() } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/score-interface-con_score-calculation
|
Chapter 7. Accessing third-party UIs
|
Chapter 7. Accessing third-party UIs Integrated Metrics, Alerting, and Dashboard UIs are provided in the OpenShift Container Platform web console. See the following for details on using these integrated UIs: Managing metrics Managing alerts Reviewing monitoring dashboards OpenShift Container Platform also provides access to the Prometheus, Alertmanager, and Grafana third-party interfaces. Note Default access to the third-party monitoring interfaces might be removed in future OpenShift Container Platform releases. Following this, you will need to use port-forwarding to access them. Note The Grafana instance that is provided with the OpenShift Container Platform monitoring stack, along with its dashboards, is read-only. Note The Grafana dashboard includes Kubernetes and cluster-monitoring metrics only. Additional platform components are included in Monitoring Dashboards in the OpenShift Container Platform web console. 7.1. Accessing third-party monitoring UIs by using the web console You can access the Alertmanager, Grafana, Prometheus, and Thanos Querier web UIs through the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Networking Routes . Note Access to the third-party Alertmanager, Grafana, Prometheus, and Thanos Querier UIs is not available from the Developer perspective. Instead, use the Metrics UI link in the Developer perspective, which includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. Select the openshift-monitoring project in the Project list. Access a third-party monitoring UI: Select the URL in the alertmanager-main row to open the login page for the Alertmanager UI. Select the URL in the grafana row to open the login page for the Grafana UI. Select the URL in the prometheus-k8s row to open the login page for the Prometheus UI. Select the URL in the thanos-querier row to open the login page for the Thanos Querier UI. Choose Log in with OpenShift to log in using your OpenShift Container Platform credentials. 7.2. Accessing third-party monitoring UIs by using the CLI You can obtain URLs for the Prometheus, Alertmanager, and Grafana web UIs by using the OpenShift CLI ( oc ) tool. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run the following command to list routes for the openshift-monitoring project: $ oc -n openshift-monitoring get routes Example output NAME HOST/PORT ... alertmanager-main alertmanager-main-openshift-monitoring.apps._url_.openshift.com ... grafana grafana-openshift-monitoring.apps._url_.openshift.com ... prometheus-k8s prometheus-k8s-openshift-monitoring.apps._url_.openshift.com ... thanos-querier thanos-querier-openshift-monitoring.apps._url_.openshift.com ... Navigate to a HOST/PORT route by using a web browser. Select Log in with OpenShift to log in using your OpenShift credentials. Important The monitoring routes are managed by the Cluster Monitoring Operator and they cannot be modified by the user. 7.3. Next steps Exposing custom application metrics for autoscaling
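The chapter mentions that port-forwarding will eventually be needed but does not show it, so the following is a minimal sketch of how that fallback might look; the pod name prometheus-k8s-0 and local port 9090 are assumptions based on a default monitoring stack and can differ in your cluster:
$ oc -n openshift-monitoring port-forward pod/prometheus-k8s-0 9090:9090
While the command keeps running, the Prometheus UI is reachable at http://localhost:9090. To script access to the existing routes instead, you can extract just the host name from one of the routes listed above, for example:
$ oc -n openshift-monitoring get route prometheus-k8s -o jsonpath='{.spec.host}'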
|
[
"oc -n openshift-monitoring get routes",
"NAME HOST/PORT alertmanager-main alertmanager-main-openshift-monitoring.apps._url_.openshift.com grafana grafana-openshift-monitoring.apps._url_.openshift.com prometheus-k8s prometheus-k8s-openshift-monitoring.apps._url_.openshift.com thanos-querier thanos-querier-openshift-monitoring.apps._url_.openshift.com"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/accessing-third-party-uis
|
Chapter 20. Restricting an application to trust only a subset of certificates
|
Chapter 20. Restricting an application to trust only a subset of certificates If your Identity Management (IdM) installation is configured with the integrated Certificate System (CS) certificate authority (CA), you are able to create lightweight sub-CAs. All sub-CAs you create are subordinated to the primary CA of the certificate system, the ipa CA. A lightweight sub-CA in this context means a sub-CA issuing certificates for a specific purpose . For example, a lightweight sub-CA enables you to configure a service, such as a virtual private network (VPN) gateway and a web browser, to accept only certificates issued by sub-CA A . By configuring other services to accept certificates only issued by sub-CA B , you prevent them from accepting certificates issued by sub-CA A , the primary CA, that is the ipa CA, and any intermediate sub-CA between the two. If you revoke the intermediate certificate of a sub-CA, all certificates issued by this sub-CA are automatically considered invalid by correctly configured clients. All the other certificates issued directly by the root CA, ipa , or another sub-CA, remain valid. This section uses the example of the Apache web server to illustrate how to restrict an application to trust only a subset of certificates. Complete this section to restrict the web server running on your IdM client to use a certificate issued by the webserver-ca IdM sub-CA, and to require the users to authenticate to the web server using user certificates issued by the webclient-ca IdM sub-CA. The steps you need to take are: Create an IdM sub-CA Download the sub-CA certificate from IdM WebUI Create a CA ACL specifying the correct combination of users, services and CAs, and the certificate profile used Request a certificate for the web service running on an IdM client from the IdM sub-CA Set up a single-instance Apache HTTP Server Add TLS encryption to the Apache HTTP Server Set the supported TLS protocol versions on an Apache HTTP Server Set the supported ciphers on the Apache HTTP Server Configure TLS client certificate authentication on the web server Request a certificate for the user from the IdM sub-CA and export it to the client Import the user certificate into the browser and configure the browser to trust the sub-CA certificate 20.1. Managing lightweight sub-CAs This section describes how to manage lightweight subordinate certificate authorities (sub-CAs). All sub-CAs you create are subordinated to the primary CA of the certificate system, the ipa CA. You can also disable and delete sub-CAs. Note If you delete a sub-CA, revocation checking for that sub-CA will no longer work. Only delete a sub-CA when there are no more certificates that were issued by that sub-CA whose notAfter expiration time is in the future. You should only disable sub-CAs while there are still non-expired certificates that were issued by that sub-CA. If all certificates that were issued by a sub-CA have expired, you can delete that sub-CA. You cannot disable or delete the IdM CA. For details on managing sub-CAs, see: Creating a sub-CA from the IdM WebUI Deleting a sub-CA from the IdM WebUI Creating a sub-CA from the IdM CLI Disabling a sub-CA from the IdM CLI Deleting a sub-CA from the IdM CLI 20.1.1. Creating a sub-CA from the IdM WebUI Follow this procedure to use the IdM WebUI to create new sub-CAs named webserver-ca and webclient-ca . Prerequisites Make sure you have obtained the administrator's credentials. Procedure In the Authentication menu, click Certificates . 
Select Certificate Authorities and click Add . Enter the name of the webserver-ca sub-CA. Enter the Subject DN, for example CN=WEBSERVER,O=IDM.EXAMPLE.COM , in the Subject DN field. Note that the Subject DN must be unique in the IdM CA infrastructure. Enter the name of the webclient-ca sub-CA. Enter the Subject DN CN=WEBCLIENT,O=IDM.EXAMPLE.COM in the Subject DN field. On the command line, run the ipa-certupdate command to create a certmonger tracking request for the webserver-ca and webclient-ca sub-CA certificates: Important Forgetting to run the ipa-certupdate command after creating a sub-CA means that if the sub-CA certificate expires, end-entity certificates issued by the sub-CA are considered invalid even if the end-entity certificate has not expired. Verification Verify that the signing certificate of the new sub-CA has been added to the IdM database: Note The new sub-CA certificate is automatically transferred to all the replicas that have a certificate system instance installed. 20.1.2. Deleting a sub-CA from the IdM WebUI Follow this procedure to delete lightweight sub-CAs in the IdM WebUI. Note If you delete a sub-CA, revocation checking for that sub-CA will no longer work. Only delete a sub-CA when there are no more certificates that were issued by that sub-CA whose notAfter expiration time is in the future. You should only disable sub-CAs while there are still non-expired certificates that were issued by that sub-CA. If all certificates that were issued by a sub-CA have expired, you can delete that sub-CA. You cannot disable or delete the IdM CA. Prerequisites Make sure you have obtained the administrator's credentials. You have disabled the sub-CA in the IdM CLI. See Disabling a sub-CA from the IdM CLI Procedure In the IdM WebUI, open the Authentication tab, and select the Certificates subtab. Select Certificate Authorities . Select the sub-CA to remove and click Delete . Figure 20.1. Deleting a sub-CA in the IdM Web UI Click Delete to confirm. The sub-CA is removed from the list of Certificate Authorities . 20.1.3. Creating a sub-CA from the IdM CLI Follow this procedure to use the IdM CLI to create new sub-CAs named webserver-ca and webclient-ca . Prerequisites Make sure that you have obtained the administrator's credentials. Make sure you are logged in to an IdM server that is a CA server. Procedure Enter the ipa ca-add command, and specify the name of the webserver-ca sub-CA and its Subject Distinguished Name (DN): Name Name of the CA. Authority ID Automatically created, individual ID for the CA. Subject DN Subject Distinguished Name (DN). The Subject DN must be unique in the IdM CA infrastructure. Issuer DN Parent CA that issued the sub-CA certificate. All sub-CAs are created as a child of the IdM root CA. Create the webclient-ca sub-CA for issuing certificates to web clients: Run the ipa-certupdate command to create a certmonger tracking request for the webserver-ca and webclient-ca sub-CAs certificates: Important If you forget to run the ipa-certupdate command after creating a sub-CA and the sub-CA certificate expires, end-entity certificates issued by that sub-CA are considered invalid even though the end-entity certificate has not expired. Verification Verify that the signing certificate of the new sub-CA has been added to the IdM database: Note The new sub-CA certificate is automatically transferred to all the replicas that have a certificate system instance installed. 20.1.4. 
Disabling a sub-CA from the IdM CLI Follow this procedure to disable a sub-CA from the IdM CLI. If there are still non-expired certificates that were issued by a sub-CA, you should not delete it but you can disable it. If you delete the sub-CA, revocation checking for that sub-CA will no longer work. Prerequisites Make sure you have obtained the administrator's credentials. Procedure Run the ipa ca-find command to determine the name of the sub-CA you are disabling: Run the ipa ca-disable command to disable your sub-CA, in this example, the webserver-ca : 20.1.5. Deleting a sub-CA from the IdM CLI Follow this procedure to delete lightweight sub-CAs from the IdM CLI. Note If you delete a sub-CA, revocation checking for that sub-CA will no longer work. Only delete a sub-CA when there are no more certificates that were issued by that sub-CA whose notAfter expiration time is in the future. You should only disable sub-CAs while there are still non-expired certificates that were issued by that sub-CA. If all certificates that were issued by a sub-CA have expired, you can delete that sub-CA. You cannot disable or delete the IdM CA. Prerequisites Make sure you have obtained the administrator's credentials. Procedure To display a list of sub-CAs and CAs, run the ipa ca-find command: Run the ipa ca-disable command to disable your sub-CA, in this example, the webserver-ca : Delete the sub-CA, in this example, the webserver-ca : Verification Run ipa ca-find to display the list of CAs and sub-CAs. The webserver-ca is no longer on the list. 20.2. Downloading the sub-CA certificate from IdM WebUI Prerequisites Make sure that you have obtained the IdM administrator's credentials. Procedure In the Authentication menu, click Certificates > Certificates . Figure 20.2. sub-CA certificate in the list of certificates Click the serial number of the sub-CA certificate to open the certificate information page. In the certificate information page, click Actions > Download . In the CLI, move the sub-CA certificate to the /etc/pki/tls/private/ directory: 20.3. Creating CA ACLs for web server and client authentication Certificate authority access control list (CA ACL) rules define which profiles can be used to issue certificates to which users, services, or hosts. By associating profiles, principals, and groups, CA ACLs permit principals or groups to request certificates using particular profiles. For example, using CA ACLs, the administrator can restrict the use of a profile intended for employees working from an office located in London only to users that are members of the London office-related group. 20.3.1. Viewing CA ACLs in IdM CLI Follow this procedure to view the list of certificate authority access control lists (CA ACLs) available in your IdM deployment and the details of a specific CA ACL. Procedure To view all the CA ACLs in your IdM environment, enter the ipa caacl-find command: To view the details of a CA ACL, enter the ipa caacl-show command, and specify the CA ACL name. For example, to view the details of the hosts_services_caIPAserviceCert CA ACL, enter: 20.3.2. Creating a CA ACL for web servers authenticating to web clients using certificates issued by webserver-ca Follow this procedure to create a CA ACL that requires the system administrator to use the webserver-ca sub-CA and the caIPAserviceCert profile when requesting a certificate for the HTTP/[email protected] service. If the user requests a certificate from a different sub-CA or of a different profile, the request fails.
The only exception is when there is another matching CA ACL that is enabled. To view the available CA ACLs, see Viewing CA ACLs in IdM CLI . Prerequisites Make sure that the HTTP/[email protected] service is part of IdM. Make sure you have obtained IdM administrator's credentials. Procedure Create a CA ACL using the ipa caacl command, and specify its name: Modify the CA ACL using the ipa caacl-mod command to specify the description of the CA ACL: Add the webserver-ca sub-CA to the CA ACL: Use the ipa caacl-add-service command to specify the service whose principal will be able to request a certificate: Use the ipa caacl-add-profile command to specify the certificate profile for the requested certificate: You can use the newly-created CA ACL straight away. It is enabled after its creation by default. Note The point of CA ACLs is to specify which CA and profile combinations are allowed for requests coming from particular principals or groups. CA ACLs do not affect certificate validation or trust. They do not affect how the issued certificates will be used. 20.3.3. Creating a CA ACL for user web browsers authenticating to web servers using certificates issued by webclient-ca Follow this procedure to create a CA ACL that requires the system administrator to use the webclient-ca sub-CA and the IECUserRoles profile when requesting a certificate. If the user requests a certificate from a different sub-CA or of a different profile, the request fails. The only exception is when there is another matching CA ACL that is enabled. To view the available CA ACLs, see Viewing CA ACLs in IdM CLI . Prerequisites Make sure that you have obtained IdM administrator's credentials. Procedure Create a CA ACL using the ipa caacl command and specify its name: Modify the CA ACL using the ipa caacl-mod command to specify the description of the CA ACL: Add the webclient-ca sub-CA to the CA ACL: Use the ipa caacl-add-profile command to specify the certificate profile for the requested certificate: Modify the CA ACL using the ipa caacl-mod command to specify that the CA ACL applies to all IdM users: You can use the newly-created CA ACL straight away. It is enabled after its creation by default. Note The point of CA ACLs is to specify which CA and profile combinations are allowed for requests coming from particular principals or groups. CA ACLs do not affect certificate validation or trust. They do not affect how the issued certificates will be used. 20.4. Obtaining an IdM certificate for a service using certmonger To ensure that communication between browsers and the web service running on your IdM client is secure and encrypted, use a TLS certificate. If you want to restrict web browsers to trust certificates issued by the webserver-ca sub-CA but no other IdM sub-CA, obtain the TLS certificate for your web service from the webserver-ca sub-CA. Follow this procedure to use certmonger to obtain an IdM certificate for a service ( HTTP/my_company.idm.example.com @ IDM.EXAMPLE.COM ) running on an IdM client. Using certmonger to request the certificate automatically means that certmonger manages and renews the certificate when it is due for a renewal. For a visual representation of what happens when certmonger requests a service certificate, see Communication flow for certmonger requesting a service certificate . Prerequisites The web server is enrolled as an IdM client. You have root access to the IdM client on which you are running the procedure.
The service for which you are requesting a certificate does not have to pre-exist in IdM. Procedure On the my_company.idm.example.com IdM client on which the HTTP service is running, request a certificate for the service corresponding to the HTTP/[email protected] principal, and specify that The certificate is to be stored in the local /etc/pki/tls/certs/httpd.pem file The private key is to be stored in the local /etc/pki/tls/private/httpd.key file The webserver-ca sub-CA is to be the issuing certificate authority That an extensionRequest for a SubjectAltName be added to the signing request with the DNS name of my_company.idm.example.com : In the command above: The ipa-getcert request command specifies that the certificate is to be obtained from the IdM CA. The ipa-getcert request command is a shortcut for getcert request -c IPA . The -g option specifies the size of key to be generated if one is not already in place. The -D option specifies the SubjectAltName DNS value to be added to the request. The -X option specifies that the issuer of the certificate must be webserver-ca , not ipa . The -C option instructs certmonger to restart the httpd service after obtaining the certificate. To specify that the certificate be issued with a particular profile, use the -T option. Note RHEL 8 uses a different SSL module in Apache than the one used in RHEL 7. The SSL module relies on OpenSSL rather than NSS. For this reason, in RHEL 8 you cannot use an NSS database to store the HTTPS certificate and the private key. Optional: To check the status of your request: The output shows that the request is in the MONITORING status, which means that a certificate has been obtained. The locations of the key pair and the certificate are those requested. 20.5. Communication flow for certmonger requesting a service certificate These diagrams show the stages of what happens when certmonger requests a service certificate from Identity Management (IdM) certificate authority (CA) server. The sequence consists of these diagrams: Unencrypted communication Certmonger requesting a service certificate IdM CA issuing the service certificate Certmonger applying the service certificate Certmonger requesting a new certificate when the old one is nearing expiration In the diagrams, the webserver-ca sub-CA is represented by the generic IdM CA server . Unencrypted communication shows the initial situation: without an HTTPS certificate, the communication between the web server and the browser is unencrypted. Figure 20.3. Unencrypted communication Certmonger requesting a service certificate shows the system administrator using certmonger to manually request an HTTPS certificate for the Apache web server. Note that when requesting a web server certificate, certmonger does not communicate directly with the CA. It proxies through IdM. Figure 20.4. Certmonger requesting a service certificate IdM CA issuing the service certificate shows an IdM CA issuing an HTTPS certificate for the web server. Figure 20.5. IdM CA issuing the service certificate Certmonger applying the service certificate shows certmonger placing the HTTPS certificate in appropriate locations on the IdM client and, if instructed to do so, restarting the httpd service. The Apache server subsequently uses the HTTPS certificate to encrypt the traffic between itself and the browser. Figure 20.6. 
Certmonger applying the service certificate Certmonger requesting a new certificate when the old one is nearing expiration shows certmonger automatically requesting a renewal of the service certificate from the IdM CA before the expiration of the certificate. The IdM CA issues a new certificate. Figure 20.7. Certmonger requesting a new certificate when the old one is nearing expiration 20.6. Setting up a single-instance Apache HTTP Server You can set up a single-instance Apache HTTP Server to serve static HTML content. Follow the procedure if the web server should provide the same content for all domains associated with the server. If you want to provide different content for different domains, set up name-based virtual hosts. For details, see Configuring Apache name-based virtual hosts . Procedure Install the httpd package: If you use firewalld , open the TCP port 80 in the local firewall: Enable and start the httpd service: Optional: Add HTML files to the /var/www/html/ directory. Note When adding content to /var/www/html/ , files and directories must be readable by the user under which httpd runs by default. The content owner can be either the root user and root user group, or another user or group of the administrator's choice. If the content owner is the root user and root user group, the files must be readable by other users. The SELinux context for all the files and directories must be httpd_sys_content_t , which is applied by default to all content within the /var/www directory. Verification Connect with a web browser to http://my_company.idm.example.com/ or http:// server_IP / . If the /var/www/html/ directory is empty or does not contain an index.html or index.htm file, Apache displays the Red Hat Enterprise Linux Test Page . If /var/www/html/ contains HTML files with a different name, you can load them by entering the URL to that file, such as http:// server_IP / example.html or http://my_company.idm.example.com/ example.html . Additional resources Apache manual: Installing the Apache HTTP Server manual . See the httpd.service(8) man page on your system. 20.7. Adding TLS encryption to an Apache HTTP Server You can enable TLS encryption on the my_company.idm.example.com Apache HTTP Server for the idm.example.com domain. Prerequisites The my_company.idm.example.com Apache HTTP Server is installed and running. You have obtained the TLS certificate from the webserver-ca sub-CA, and stored it in the /etc/pki/tls/certs/httpd.pem file as described in Obtaining an IdM certificate for a service using certmonger . If you use a different path, adapt the corresponding steps of the procedure. The corresponding private key is stored in the /etc/pki/tls/private/httpd.key file. If you use a different path, adapt the corresponding steps of the procedure. The webserver-ca CA certificate is stored in the /etc/pki/tls/private/sub-ca.crt file. If you use a different path, adapt the corresponding steps of the procedure. Clients and the my_company.idm.example.com web server resolve the host name of the server to the IP address of the web server. Procedure Install the mod_ssl package: Edit the /etc/httpd/conf.d/ssl.conf file and add the following settings to the <VirtualHost _default_:443> directive: Set the server name: Important The server name must match the entry set in the Common Name field of the certificate. Optional: If the certificate contains additional host names in the Subject Alt Names (SAN) field, you can configure mod_ssl to provide TLS encryption for these host names as well.
To configure this, add the ServerAliases parameter with corresponding names: Set the paths to the private key, the server certificate, and the CA certificate: For security reasons, ensure that only the root user can access the private key file: Warning If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure. If you use firewalld , open port 443 in the local firewall: Restart the httpd service: Note If you protected the private key file with a password, you must enter this password each time when the httpd service starts. Use a browser and connect to https://my_company.idm.example.com . Additional resources SSL/TLS Encryption . Security considerations for TLS in RHEL 8 20.8. Setting the supported TLS protocol versions on an Apache HTTP Server By default, the Apache HTTP Server on RHEL uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For example, the DEFAULT policy defines that only the TLSv1.2 and TLSv1.3 protocol versions are enabled in Apache. You can manually configure which TLS protocol versions your my_company.idm.example.com Apache HTTP Server supports. Follow the procedure if your environment requires enabling only specific TLS protocol versions, for example: If your environment requires that clients can also use the weak TLS1 (TLSv1.0) or TLS1.1 protocol. If you want to configure Apache to support only the TLSv1.2 or TLSv1.3 protocol. Prerequisites TLS encryption is enabled on the my_company.idm.example.com server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the following setting to the <VirtualHost> directive for which you want to set the TLS protocol version. For example, to enable only the TLSv1.3 protocol: Restart the httpd service: Verification Use the following command to verify that the server supports TLSv1.3 : Use the following command to verify that the server does not support TLSv1.2 : If the server does not support the protocol, the command returns an error: Optional: Repeat the command for other TLS protocol versions. Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . For further details about the SSLProtocol parameter, refer to the mod_ssl documentation in the Apache manual: Installing the Apache HTTP Server manual . 20.9. Setting the supported ciphers on an Apache HTTP Server By default, the Apache HTTP Server uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For the list of ciphers the system-wide crypto policy allows, see the /etc/crypto-policies/back-ends/openssl.config file. You can manually configure which ciphers the my_company.idm.example.com Apache HTTP server supports. Follow the procedure if your environment requires specific ciphers. Prerequisites TLS encryption is enabled on the my_company.idm.example.com server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the SSLCipherSuite parameter to the <VirtualHost> directive for which you want to set the TLS ciphers: This example enables only the EECDH+AESGCM , EDH+AESGCM , AES256+EECDH , and AES256+EDH ciphers and disables all ciphers which use the SHA1 and SHA256 message authentication code (MAC).
Restart the httpd service: Verification To display the list of ciphers the Apache HTTP Server supports: Install the nmap package: Use the nmap utility to display the supported ciphers: Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . Installing the Apache HTTP Server manual - SSLCipherSuite 20.10. Configuring TLS client certificate authentication Client certificate authentication enables administrators to allow only users who authenticate using a certificate to access resources on the my_company.idm.example.com web server. You can configure client certificate authentication for the /var/www/html/Example/ directory. Important If the my_company.idm.example.com Apache server uses the TLS 1.3 protocol, certain clients require additional configuration. For example, in Firefox, set the security.tls.enable_post_handshake_auth parameter in the about:config menu to true . For further details, see Transport Layer Security version 1.3 in Red Hat Enterprise Linux 8 . Prerequisites TLS encryption is enabled on the my_company.idm.example.com server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file and add the following settings to the <VirtualHost> directive for which you want to configure client authentication: The SSLVerifyClient require setting defines that the server must successfully validate the client certificate before the client can access the content in the /var/www/html/Example/ directory. A consolidated <VirtualHost> example that brings together the TLS settings from sections 20.7 to 20.10 is shown at the end of this chapter. Restart the httpd service: Verification Use the curl utility to access the https://my_company.idm.example.com/Example/ URL without client authentication: The error indicates that the my_company.idm.example.com web server requires client certificate authentication. Pass the client private key and certificate, as well as the CA certificate to curl to access the same URL with client authentication: If the request succeeds, curl displays the index.html file stored in the /var/www/html/Example/ directory. Additional resources Installing the Apache HTTP Server manual - mod_ssl configuration 20.11. Requesting a new user certificate and exporting it to the client As an Identity Management (IdM) administrator, you can configure a web server running on an IdM client to require users that use web browsers to access the server to authenticate with certificates issued by a specific IdM sub-CA. Follow this procedure to request a user certificate from a specific IdM sub-CA and to export the certificate and the corresponding private key onto the host from which the user wants to access the web server using a web browser. Afterwards, import the certificate and the private key into the browser . Procedure Optional: Create a new directory, for example ~/certdb/ , and make it a temporary certificate database. When asked, create an NSS Certificate DB password to encrypt the keys to the certificate to be generated in a subsequent step: Create the certificate signing request (CSR) and redirect the output to a file. For example, to create a CSR with the name certificate_request.csr for a 4096 bit certificate for the idm_user user in the IDM.EXAMPLE.COM realm, setting the nickname of the certificate private keys to idm_user for easy findability, and setting the subject to CN=idm_user,O=IDM.EXAMPLE.COM : When prompted, enter the same password that you entered when using certutil to create the temporary database.
Then continue typing randomly until told to stop: Submit the certificate request file to the server. Specify the Kerberos principal to associate with the newly-issued certificate, the output file to store the certificate, and optionally the certificate profile. Specify the IdM sub-CA that you want to use to issue the certificate. For example, to obtain a certificate of the IECUserRoles profile, a profile with added user roles extension, for the idm_user @ IDM.EXAMPLE.COM principal from webclient-ca , and save the certificate in the ~/idm_user.pem file: Add the certificate to the NSS database. Use the -n option to set the same nickname that you used when creating the CSR previously so that the certificate matches the private key in the NSS database. The -t option sets the trust level. For details, see the certutil(1) man page. The -i option specifies the input certificate file. For example, to add to the NSS database a certificate with the idm_user nickname that is stored in the ~/idm_user.pem file in the ~/certdb/ database: Verify that the key in the NSS database does not show (orphan) as its nickname. For example, to verify that the certificate stored in the ~/certdb/ database is not orphaned: Use the pk12util command to export the certificate from the NSS database to the PKCS12 format. For example, to export the certificate with the idm_user nickname from the /root/certdb NSS database into the ~/idm_user.p12 file: Transfer the certificate to the host on which you want the certificate authentication for idm_user to be enabled: On the host to which the certificate has been transferred, make the directory in which the .pkcs12 file is stored inaccessible to the 'other' group for security reasons: For security reasons, remove the temporary NSS database and the .pkcs12 file from the server: 20.12. Configuring a browser to enable certificate authentication To be able to authenticate with a certificate when using the WebUI to log into Identity Management (IdM), you need to import the user and the relevant certificate authority (CA) certificates into the Mozilla Firefox or Google Chrome browser. The host itself on which the browser is running does not have to be part of the IdM domain. IdM supports the following browsers for connecting to the WebUI: Mozilla Firefox 38 and later Google Chrome 46 and later The following procedure shows how to configure the Mozilla Firefox 57.0.1 browser. Prerequisites You have the user certificate that you want to import to the browser at your disposal in the PKCS#12 format. You have downloaded the sub-CA certificate and have it at your disposal in the PEM format. Procedure Open Firefox, then navigate to Preferences Privacy & Security . Figure 20.8. Privacy and Security section in Preferences Click View Certificates . Figure 20.9. View Certificates in Privacy and Security In the Your Certificates tab, click Import . Locate and open the certificate of the user in the PKCS12 format, then click OK and OK . To make sure that your IdM sub-CA is recognized by Firefox as a trusted authority, import the IdM sub-CA certificate that you saved in Downloading the sub-CA certificate from IdM WebUI as a trusted certificate authority certificate: Open Firefox, navigate to Preferences and click Privacy & Security . Figure 20.10. Privacy and Security section in Preferences Click View Certificates . Figure 20.11. View Certificates in Privacy and Security In the Authorities tab, click Import . Locate and open the sub-CA certificate.
Trust the certificate to identify websites, then click OK and OK .
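The TLS-related Apache directives in this chapter appear one fragment at a time, so it can help to see them side by side. The following is a minimal sketch of a combined <VirtualHost _default_:443> block assembled only from the fragments used in sections 20.7 to 20.10; it is not a complete ssl.conf, and the protocol, cipher, and path values are this chapter's examples rather than recommendations for your environment:
<VirtualHost _default_:443>
    # Server name matching the Common Name of the certificate (section 20.7)
    ServerName my_company.idm.example.com
    # Private key, server certificate, and CA certificate paths used in this chapter
    SSLCertificateKeyFile "/etc/pki/tls/private/httpd.key"
    SSLCertificateFile "/etc/pki/tls/certs/httpd.pem"
    SSLCACertificateFile "/etc/pki/tls/certs/ca.crt"
    # Restrict the protocol and ciphers (sections 20.8 and 20.9)
    SSLProtocol -All TLSv1.3
    SSLCipherSuite "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256"
    # Require client certificate authentication for one directory (section 20.10)
    <Directory "/var/www/html/Example/">
        SSLVerifyClient require
    </Directory>
</VirtualHost>
Whether you keep these directives in /etc/httpd/conf.d/ssl.conf or /etc/httpd/conf/httpd.conf, restart the httpd service after editing, as in the individual procedures.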
|
[
"ipa-certupdate",
"certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u",
"ipa ca-add webserver-ca --subject=\" CN=WEBSERVER,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"webserver-ca\" ------------------- Name: webserver-ca Authority ID: ba83f324-5e50-4114-b109-acca05d6f1dc Subject DN: CN=WEBSERVER,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM",
"ipa ca-add webclient-ca --subject=\" CN=WEBCLIENT,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"webclient-ca\" ------------------- Name: webclient-ca Authority ID: 8a479f3a-0454-4a4d-8ade-fd3b5a54ab2e Subject DN: CN=WEBCLIENT,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM",
"ipa-certupdate",
"certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u",
"ipa ca-find ------------- 3 CAs matched ------------- Name: ipa Description: IPA CA Authority ID: 5195deaf-3b61-4aab-b608-317aff38497c Subject DN: CN=Certificate Authority,O=IPA.TEST Issuer DN: CN=Certificate Authority,O=IPA.TEST Name: webclient-ca Authority ID: 605a472c-9c6e-425e-b959-f1955209b092 Subject DN: CN=WEBCLIENT,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IPA.TEST Name: webserver-ca Authority ID: 02d537f9-c178-4433-98ea-53aa92126fc3 Subject DN: CN=WEBSERVER,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IPA.TEST ---------------------------- Number of entries returned 3 ----------------------------",
"ipa ca-disable webserver-ca -------------------------- Disabled CA \"webserver-ca\" --------------------------",
"ipa ca-find ------------- 3 CAs matched ------------- Name: ipa Description: IPA CA Authority ID: 5195deaf-3b61-4aab-b608-317aff38497c Subject DN: CN=Certificate Authority,O=IPA.TEST Issuer DN: CN=Certificate Authority,O=IPA.TEST Name: webclient-ca Authority ID: 605a472c-9c6e-425e-b959-f1955209b092 Subject DN: CN=WEBCLIENT,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IPA.TEST Name: webserver-ca Authority ID: 02d537f9-c178-4433-98ea-53aa92126fc3 Subject DN: CN=WEBSERVER,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IPA.TEST ---------------------------- Number of entries returned 3 ----------------------------",
"ipa ca-disable webserver-ca -------------------------- Disabled CA \"webserver-ca\" --------------------------",
"ipa ca-del webserver-ca ------------------------- Deleted CA \"webserver-ca\" -------------------------",
"ipa ca-find ------------- 2 CAs matched ------------- Name: ipa Description: IPA CA Authority ID: 5195deaf-3b61-4aab-b608-317aff38497c Subject DN: CN=Certificate Authority,O=IPA.TEST Issuer DN: CN=Certificate Authority,O=IPA.TEST Name: webclient-ca Authority ID: 605a472c-9c6e-425e-b959-f1955209b092 Subject DN: CN=WEBCLIENT,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IPA.TEST ---------------------------- Number of entries returned 2 ----------------------------",
"mv path/to/the/downloaded/certificate /etc/pki/tls/private/sub-ca.crt",
"ipa caacl-find ----------------- 1 CA ACL matched ----------------- ACL name: hosts_services_caIPAserviceCert Enabled: TRUE",
"ipa caacl-show hosts_services_caIPAserviceCert ACL name: hosts_services_caIPAserviceCert Enabled: TRUE Host category: all Service category: all CAs: ipa Profiles: caIPAserviceCert Users: admin",
"ipa caacl-add TLS_web_server_authentication -------------------------------------------- Added CA ACL \"TLS_web_server_authentication\" -------------------------------------------- ACL name: TLS_web_server_authentication Enabled: TRUE",
"ipa caacl-mod TLS_web_server_authentication --desc=\"CAACL for web servers authenticating to web clients using certificates issued by webserver-ca\" ----------------------------------------------- Modified CA ACL \"TLS_web_server_authentication\" ----------------------------------------------- ACL name: TLS_web_server_authentication Description: CAACL for web servers authenticating to web clients using certificates issued by webserver-ca Enabled: TRUE",
"ipa caacl-add-ca TLS_web_server_authentication --ca=webserver-ca ACL name: TLS_web_server_authentication Description: CAACL for web servers authenticating to web clients using certificates issued by webserver-ca Enabled: TRUE CAs: webserver-ca ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add-service TLS_web_server_authentication --service=HTTP/[email protected] ACL name: TLS_web_server_authentication Description: CAACL for web servers authenticating to web clients using certificates issued by webserver-ca Enabled: TRUE CAs: webserver-ca Services: HTTP/[email protected] ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add-profile TLS_web_server_authentication --certprofiles=caIPAserviceCert ACL name: TLS_web_server_authentication Description: CAACL for web servers authenticating to web clients using certificates issued by webserver-ca Enabled: TRUE CAs: webserver-ca Profiles: caIPAserviceCert Services: HTTP/[email protected] ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add TLS_web_client_authentication -------------------------------------------- Added CA ACL \"TLS_web_client_authentication\" -------------------------------------------- ACL name: TLS_web_client_authentication Enabled: TRUE",
"ipa caacl-mod TLS_web_client_authentication --desc=\"CAACL for user web browsers authenticating to web servers using certificates issued by webclient-ca\" ----------------------------------------------- Modified CA ACL \"TLS_web_client_authentication\" ----------------------------------------------- ACL name: TLS_web_client_authentication Description: CAACL for user web browsers authenticating to web servers using certificates issued by webclient-ca Enabled: TRUE",
"ipa caacl-add-ca TLS_web_client_authentication --ca=webclient-ca ACL name: TLS_web_client_authentication Description: CAACL for user web browsers authenticating to web servers using certificates issued by webclient-ca Enabled: TRUE CAs: webclient-ca ------------------------- Number of members added 1 -------------------------",
"ipa caacl-add-profile TLS_web_client_authentication --certprofiles=IECUserRoles ACL name: TLS_web_client_authentication Description: CAACL for user web browsers authenticating to web servers using certificates issued by webclient-ca Enabled: TRUE CAs: webclient-ca Profiles: IECUserRoles ------------------------- Number of members added 1 -------------------------",
"ipa caacl-mod TLS_web_client_authentication --usercat=all ----------------------------------------------- Modified CA ACL \"TLS_web_client_authentication\" ----------------------------------------------- ACL name: TLS_web_client_authentication Description: CAACL for user web browsers authenticating to web servers using certificates issued by webclient-ca Enabled: TRUE User category: all CAs: webclient-ca Profiles: IECUserRoles",
"ipa-getcert request -K HTTP/my_company.idm.example.com -k /etc/pki/tls/private/httpd.key -f /etc/pki/tls/certs/httpd.pem -g 2048 -D my_company.idm.example.com -X webserver-ca -C \"systemctl restart httpd\" New signing request \"20190604065735\" added.",
"ipa-getcert list -f /etc/pki/tls/certs/httpd.pem Number of certificates and requests being tracked: 3. Request ID '20190604065735': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/httpd.key' certificate: type=FILE,location='/etc/pki/tls/certs/httpd.crt' CA: IPA issuer: CN=WEBSERVER,O=IDM.EXAMPLE.COM [...]",
"yum install httpd",
"firewall-cmd --permanent --add-port=80/tcp firewall-cmd --reload",
"systemctl enable --now httpd",
"yum install mod_ssl",
"ServerName my_company.idm.example.com",
"ServerAlias www.my_company.idm.example.com server.my_company.idm.example.com",
"SSLCertificateKeyFile \"/etc/pki/tls/private/httpd.key\" SSLCertificateFile \"/etc/pki/tls/certs/httpd.pem\" SSLCACertificateFile \"/etc/pki/tls/certs/ca.crt\"",
"chown root:root /etc/pki/tls/private/httpd.key chmod 600 //etc/pki/tls/private/httpd.key",
"firewall-cmd --permanent --add-port=443/tcp firewall-cmd --reload",
"systemctl restart httpd",
"SSLProtocol -All TLSv1.3",
"systemctl restart httpd",
"openssl s_client -connect example.com :443 -tls1_3",
"openssl s_client -connect example.com :443 -tls1_2",
"140111600609088:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1543:SSL alert number 70",
"SSLCipherSuite \"EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256\"",
"systemctl restart httpd",
"yum install nmap",
"nmap --script ssl-enum-ciphers -p 443 example.com PORT STATE SERVICE 443/tcp open https | ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A | TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A | TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A",
"<Directory \"/var/www/html/Example/\"> SSLVerifyClient require </Directory>",
"systemctl restart httpd",
"curl https://my_company.idm.example.com/Example/ curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0",
"curl --cacert ca.crt --key client.key --cert client.crt https://my_company.idm.example.com/Example/",
"mkdir ~/certdb/ certutil -N -d ~/certdb/ Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password:",
"certutil -R -d ~/certdb/ -a -g 4096 -n idm_user -s \"CN= idm_user ,O=IDM.EXAMPLE.COM\" > certificate_request.csr",
"Enter Password or Pin for \"NSS Certificate DB\": A random seed must be generated that will be used in the creation of your key. One of the easiest ways to create a random seed is to use the timing of keystrokes on a keyboard. To begin, type keys on the keyboard until this progress meter is full. DO NOT USE THE AUTOREPEAT FUNCTION ON YOUR KEYBOARD! Continue typing until the progress meter is full:",
"ipa cert-request certificate_request.csr --principal= idm_user @ IDM.EXAMPLE.COM --profile-id= IECUserRoles --ca= webclient-ca --certificate-out= ~/idm_user.pem",
"certutil -A -d ~/certdb/ -n idm_user -t \"P,,\" -i ~/idm_user.pem",
"certutil -K -d ~/certdb/ < 0> rsa 5ad14d41463b87a095b1896cf0068ccc467df395 NSS Certificate DB:idm_user",
"pk12util -d ~/certdb -o ~/idm_user.p12 -n idm_user Enter Password or Pin for \"NSS Certificate DB\": Enter password for PKCS12 file: Re-enter password: pk12util: PKCS12 EXPORT SUCCESSFUL",
"scp ~/idm_user.p12 [email protected]:/home/idm_user/",
"chmod o-rwx /home/idm_user/",
"rm ~/certdb/ rm ~/idm_user.p12"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/restricting-an-application-to-trust-only-a-subset-of-certificates_working-with-idm-certificates
|
Chapter 31. Is Tombstone Filter Action
|
Chapter 31. Is Tombstone Filter Action Filter messages based on whether the message body is present. 31.1. Configuration Options The is-tombstone-filter-action Kamelet does not specify any configuration options. 31.2. Dependencies At runtime, the is-tombstone-filter-action Kamelet relies upon the presence of the following dependencies: camel:core camel:kamelet 31.3. Usage This section describes how you can use the is-tombstone-filter-action . 31.3.1. Knative Action You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Knative binding. is-tombstone-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: is-tombstone-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: is-tombstone-filter-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 31.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 31.3.1.2. Procedure for using the cluster CLI Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f is-tombstone-filter-action-binding.yaml 31.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step is-tombstone-filter-action channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 31.3.2. Kafka Action You can use the is-tombstone-filter-action Kamelet as an intermediate step in a Kafka binding. is-tombstone-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: is-tombstone-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: is-tombstone-filter-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 31.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 31.3.2.2. Procedure for using the cluster CLI Save the is-tombstone-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f is-tombstone-filter-action-binding.yaml 31.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step is-tombstone-filter-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 31.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/is-tombstone-filter-action.kamelet.yaml
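The chapter does not include a verification step, so the following is a brief, non-authoritative sketch of how you might check the result after applying either binding; it assumes the default Camel K behaviour of creating an integration with the same name as the KameletBinding:
oc get kameletbinding is-tombstone-filter-action-binding
kamel logs is-tombstone-filter-action-binding
The first command confirms that the KameletBinding resource exists and reports its phase, and the second streams the logs of the generated integration so you can see whether the timer-source messages pass the filter.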
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: is-tombstone-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: is-tombstone-filter-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f is-tombstone-filter-action-binding.yaml",
"kamel bind timer-source?message=Hello --step is-tombstone-filter-action channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: is-tombstone-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: is-tombstone-filter-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f is-tombstone-filter-action-binding.yaml",
"kamel bind timer-source?message=Hello --step is-tombstone-filter-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/is-tombstone-filter-action
|
Building applications
|
Building applications OpenShift Container Platform 4.9 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
|
[
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc status",
"oc delete project <project_name>",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-ha:centos8-13.4-0 postgresVersion: 13 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.33-2 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi - name: repo2 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi proxy: pgBouncer: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:centos8-1.15-2 EOD",
"postgrescluster.postgres-operator.crunchydata.com/hippo created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE hippo-backup-nqjg-2rq94 1/1 Running 0 35s hippo-instance1-nw92-0 3/3 Running 0 112s hippo-pgbouncer-57b98f4476-znsk5 2/2 Running 0 112s hippo-repo-host-0 1/1 Running 0 112s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s",
"for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done",
"/bindings/spring-petclinic-pgcluster/username: hippo /bindings/spring-petclinic-pgcluster/password: KXKF{nAI,I-J6zLt:W+FKnze /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD",
"oc get subs -n openshift-operators",
"NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable",
"oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD",
"database.postgresql.dev4devs.com/sampledatabase created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"MTBz\" user: \"Z3Vlc3Q=\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments",
"apiVersion: servicebinding.io/v1alpha3 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service application: apiVersion: apps/v1 kind: Deployment name: spring-petclinic",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: servicebinding.io/v1alpha3 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"MTBz\" user: \"Z3Vlc3Q=\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"MTBz\" user: \"Z3Vlc3Q=\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com",
"status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power",
"/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power",
"spec: tags: - knowledge - is - power",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com",
"USDSERVICE_BINDING_ROOT 1 ├── account-database 2 │ ├── type 3 │ ├── provider 4 │ ├── uri │ ├── username │ └── password └── transaction-event-stream 5 ├── type ├── connection-count ├── uri ├── certificates └── private-key",
"import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")",
"from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments",
"host: hippo-pgbouncer port: 5432",
"DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432",
"application: name: spring-petclinic group: apps version: v1 resource: deployments",
"services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo",
"DATABASE_HOST: hippo-pgbouncer",
"POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1",
"oc delete ServiceBinding <.metadata.name>",
"oc delete ServiceBinding spring-petclinic-pgcluster",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"triggers: - type: \"ConfigChange\"",
"triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"cat gpu-quota.yaml",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: default data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod health-check",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"k8s.gcr.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"k8s.gcr.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"k8s.gcr.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 244.116568ms",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n N -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"'\"sha:abz\" USD'\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/building_applications/index
|
Chapter 7. Configuring memory usage for addresses
|
Chapter 7. Configuring memory usage for addresses AMQ Broker transparently supports huge queues containing millions of messages, even if the machine that is hosting the broker is running with limited memory. In these situations, it might not be possible to store all of the queues in memory at any one time. To protect against excess memory consumption, you can configure the maximum memory usage that is allowed for each address on the broker. In addition, you can configure the broker to take one of the following actions when memory usage for an address reaches the configured limit: Page messages Silently drop messages Drop messages and notify the sending clients Block clients from sending messages If you configure the broker to page messages when the maximum memory usage for an address is reached, you can configure limits for specific addresses to: Limit the disk space used to page incoming messages Limit the memory used for paged messages that the broker transfers from disk back to memory when clients are ready to consume messages. You can also set a disk usage threshold, which overrides all the configured paging limits. If the disk usage threshold is reached, the broker stops paging and blocks all incoming messages. Important When you use transactions, the broker might allocate extra memory to ensure transactional consistency. In this case, the memory usage reported by the broker might not reflect the total number of bytes being used in memory. Therefore, if you configure the broker to page, drop, or block messages based on a specified maximum memory usage, you should not also use transactions. 7.1. Configuring message paging For any address that has a maximum memory usage limit specified, you can also specify what action the broker takes when that usage limit is reached. One of the options that you can configure is paging. If you configure the paging option, when the maximum size of an address is reached, the broker starts to store messages for that address on disk, in files known as page files. Each page file has a maximum size that you can configure. Each address that you configure in this way has a dedicated folder in your file system to store paged messages. Both queue browsers and consumers can navigate through page files when inspecting messages in a queue. However, a consumer that is using a very specific filter might not be able to consume a message that is stored in a page file until existing messages in the queue have been consumed first. For example, suppose that a consumer filter includes a string expression such as "color='red'". If a message that meets this condition follows one million messages with the property "color='blue'", the consumer cannot consume the message until those with "color='blue'" have been consumed first. The broker transfers (that is, depages) messages from disk into memory when clients are ready to consume them. The broker removes a page file from disk when all messages in that file have been acknowledged. The procedures that follow show how to configure message paging. 7.1.1. Specifying a paging directory The following procedure shows how to specify the location of the paging directory. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. Within the core element, add the paging-directory element. Specify a location for the paging directory in your file system. <configuration ...> <core ...> ... <paging-directory> /path/to/paging-directory </paging-directory> ...
</core> </configuration> For each address that you subsequently configure for paging, the broker adds a dedicated directory within the paging directory that you have specified. 7.1.2. Configuring an address for paging The following procedure shows how to configure an address for paging. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues. Procedure Open the <broker_instance_dir>/etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to specify maximum memory usage and define paging behavior. For example: <address-settings> <address-setting match="my.paged.address"> ... <max-size-bytes>104857600</max-size-bytes> <max-size-messages>20000</max-size-messages> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the action specified for the address-full-policy attribute. The default value is -1, which means that there is no limit. The value that you specify also supports byte notation such as "K", "MB", and "GB". max-size-messages Maximum number of messages allowed for the address before the broker executes the action specified for the address-full-policy attribute. The default value is -1, which means that there is no message limit. page-size-bytes Size, in bytes, of each page file used on the paging system. The default value is 10485760 (that is, 10 MiB). The value that you specify also supports byte notation such as "K", "MB", and "GB". address-full-policy Action that the broker takes when the maximum size for an address has been reached. The default value is PAGE. Valid values are: PAGE The broker pages any further messages to disk. DROP The broker silently drops any further messages. FAIL The broker drops any further messages and issues exceptions to client message producers. BLOCK Client message producers block when they try to send further messages. If you set limits for the max-size-bytes and max-size-messages attributes, the broker executes the action specified for the address-full-policy attribute when either limit is reached. With the configuration in the example, the broker starts paging messages for the my.paged.address address when the total number of messages in memory for the address exceeds 20,000 or the address uses 104857600 bytes of available memory. Additional paging configuration elements that are not shown in the preceding example are described below. page-sync-timeout Time, in nanoseconds, between periodic page synchronizations. If you are using an asynchronous IO journal (that is, journal-type is set to ASYNCIO in the broker.xml configuration file), the default value is 3333333. If you are using a standard Java NIO journal (that is, journal-type is set to NIO), the default value is the configured value of the journal-buffer-timeout parameter. In the preceding example, when messages sent to the address my.paged.address exceed 104857600 bytes in memory, the broker begins paging. Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes. 7.1.3.
Configuring a global paging size Sometimes, configuring a memory limit per address is not practical, for example, when a broker manages many addresses that have different usage patterns. In these situations, you can specify a global memory limit. The global limit is the total amount of memory that the broker can use for all addresses. When this memory limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each new incoming message. The following procedure shows how to configure a global paging size. Prerequisites You should be familiar with how to configure an address for paging. For more information, see Section 7.1.2, "Configuring an address for paging". Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir>/etc/broker.xml configuration file. Within the core element, add the global-max-size element and specify a value. For example: <configuration> <core> ... <global-max-size>1GB</global-max-size> <global-max-messages>900000</global-max-messages> ... </core> </configuration> global-max-size Total amount of memory, in bytes, that the broker can use for all addresses. When this limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each incoming message. The default value of global-max-size is half of the maximum memory available to the Java virtual machine (JVM) that is hosting the broker. The value for global-max-size is in bytes, but also supports byte notation (for example, "K", "MB", "GB"). In the preceding example, the broker is configured to use a maximum of one gigabyte of available memory when processing messages. global-max-messages The total number of messages allowed for all addresses. When this limit is reached, the broker executes the action specified for the address-full-policy attribute for the address associated with each incoming message. The default value is -1, which means that there is no message limit. If you set limits for the global-max-size and global-max-messages attributes, the broker executes the action specified for the address-full-policy attribute when either limit is reached. With the configuration in the example, the broker starts paging messages for all addresses when the number of messages in memory exceeds 900,000 or the broker uses 1 GB of available memory. Note If limits that are set for an individual address, by using the max-size-bytes or max-size-messages attributes, are reached before the limits set for the global-max-size or global-max-messages attributes, the broker executes the action specified for the address-full-policy attribute for that address. Start the broker. On Linux: On Windows: 7.1.4. Limiting disk usage during paging for specific addresses You can limit the amount of disk space that the broker can use before it stops paging incoming messages for an individual address or set of addresses. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir>/etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add attributes to specify paging limits based on disk usage or number of messages, or both, and specify the action to take if either limit is reached. For example: <address-settings> <address-setting match="my.paged.address"> ... <page-limit-bytes>10G</page-limit-bytes> <page-limit-messages>1000000</page-limit-messages> <page-full-policy>FAIL</page-full-policy> ...
</address-setting> </address-settings> page-limit-bytes Maximum size, in bytes, of the disk space allowed for paging incoming messages for the address before the broker executes the action specified for the page-full-policy attribute. The value that you specify supports byte notation such as "K", "MB", and "GB". The default value is -1, which means that there is no limit. page-limit-messages Maximum number of incoming messages that can be paged for the address before the broker executes the action specified for the page-full-policy attribute. The default value is -1, which means that there is no message limit. page-full-policy Action that the broker takes when a limit set in the page-limit-bytes or page-limit-messages attributes is reached for an address. Valid values are: DROP The broker silently drops any further messages. FAIL The broker drops any further messages and notifies the sending clients. In the preceding example, the broker pages messages for the my.paged.address address until paging uses 10GB of disk space or until a total of one million messages are paged. Start the broker. On Linux: On Windows: 7.1.5. Controlling the flow of paged messages into memory If AMQ Broker is configured to page messages to disk, the broker reads paged messages and transfers the messages into memory when clients are ready to consume messages. To prevent messages from consuming excess memory, you can limit the memory used by each address for messages that the broker transfers from disk to memory. Important If client applications leave too many messages pending acknowledgment, the broker does not read paged messages until the pending messages are acknowledged, which can cause message starvation on the broker. For example, if the limit for the transfer of paged messages into memory, which is 20MB by default, is reached, the broker waits for an acknowledgment from a client before it reads any more messages. If, at the same time, clients are waiting to receive sufficient messages before they send an acknowledgment to the broker, which is determined by the batch size used by clients, the broker is starved of messages. To avoid starvation, either increase the broker limits that control the transfer of paged messages into memory or reduce the number of delivering messages. You can reduce the number of delivering messages by ensuring that clients either commit message acknowledgments sooner or use a timeout and commit acknowledgments when no more messages are received from the broker. You can see the number and size of delivering messages in a queue's Delivering Count and Delivering Bytes metrics in AMQ Management Console. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, specify limits on the transfer of paged messages into memory. For example: <address-settings> <address-setting match="my.paged.address"> ... <max-read-page-messages>104857600</max-read-page-messages> <max-read-page-bytes>20MB</max-read-page-bytes> ... </address-setting> </address-settings> max-read-page-messages Maximum number of paged messages that the broker can read from disk into memory per-address. The default value is -1, which means that no limit applies. max-read-page-bytes Maximum size, in bytes, of paged messages that the broker can read from disk into memory per-address. The default value is 20MB.
Note If you specify limits for both the max-read-page-messages and max-read-page-bytes attributes, the broker stops reading messages when either limit is reached. 7.1.6. Setting a disk usage threshold You can set a disk usage threshold which, if reached, causes the broker to stop paging and block all incoming messages. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the max-disk-usage configuration element and specify a value. For example: <configuration> <core> ... <max-disk-usage>80</max-disk-usage> ... </core> </configuration> max-disk-usage Maximum percentage of the available disk space that the broker can use. When this limit is reached, the broker blocks incoming messages. The default value is 90 . In the preceding example, the broker is limited to using eighty percent of available disk space. Start the broker. On Linux: On Windows: 7.2. Configuring message dropping Section 7.1.2, "Configuring an address for paging" shows how to configure an address for paging. As part of that procedure, you set the value of address-full-policy to PAGE . To drop messages (rather than paging them) when an address reaches its specified maximum size, set the value of the address-full-policy to one of the following: DROP When the maximum size of a given address has been reached, the broker silently drops any further messages. FAIL When the maximum size of a given address has been reached, the broker drops any further messages and issues exceptions to producers. 7.3. Configuring message blocking The following procedures show how to configure message blocking when a given address reaches the maximum size limit that you have specified. Note You can configure message blocking only for the Core, OpenWire, and AMQP protocols. 7.3.1. Blocking Core and OpenWire producers The following procedure shows how to configure message blocking for Core and OpenWire message producers when a given address reaches the maximum size limit that you have specified. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to define message blocking behavior. For example: <address-settings> <address-setting match="my.blocking.address"> ... <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy . The value that you specify also supports byte notation such as "K", "MB", and "GB". Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes . address-full-policy Action that the broker takes when the maximum size for an address has been reached. In the preceding example, when messages sent to the address my.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from Core or OpenWire message producers. 7.3.2.
Blocking AMQP producers Protocols such as Core and OpenWire use a window-size flow control system. In this system, credits represent bytes and are allocated to producers. If a producer wants to send a message, the producer must wait until it has sufficient credits for the size of the message. By contrast, AMQP flow control credits do not represent bytes. Instead, AMQP credits represent the number of messages a producer is permitted to send, regardless of message size. Therefore, it is possible, in some situations, for AMQP producers to significantly exceed the max-size-bytes value of an address. Therefore, to block AMQP producers, you must use a different configuration element, max-size-bytes-reject-threshold . For a matching address or set of addresses, this element specifies the maximum size, in bytes, of all AMQP messages in memory. When the total size of all messages in memory reaches the specified limit, the broker blocks AMQP producers from sending further messages. The following procedure shows how to configure message blocking for AMQP message producers. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, specify the maximum size of all AMQP messages in memory. For example: <address-settings> <address-setting match="my.amqp.blocking.address"> ... <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> ... </address-setting> </address-settings> max-size-bytes-reject-threshold Maximum size, in bytes, of the memory allowed for the address before the broker blocks further AMQP messages. The value that you specify also supports byte notation such as "K", "MB", and "GB". By default, max-size-bytes-reject-threshold is set to -1 , which means that there is no maximum size. Note If you specify max-size-bytes-reject-threshold in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes-reject-threshold . In the preceding example, when messages sent to the address my.amqp.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from AMQP producers. 7.4. Understanding memory usage on multicast addresses When a message is routed to an address that has multicast queues bound to it, there is only one copy of the message in memory. Each queue has only a reference to the message. Because of this, the associated memory is released only after all queues referencing the message have delivered it. In this type of situation, if you have a slow consumer, the entire address might experience a negative performance impact. For example, consider this scenario: An address has ten queues that use the multicast routing type. Due to a slow consumer, one of the queues does not deliver its messages. The other nine queues continue to deliver messages and are empty. Messages continue to arrive to the address. The queue with the slow consumer continues to accumulate references to the messages, causing the broker to keep the messages in memory. When the maximum size of the address is reached, the broker starts to page messages. 
In this scenario, because of a single slow consumer, consumers on all queues are forced to consume messages from the page system, requiring additional IO. Additional resources To learn how to configure flow control to regulate the flow of data between the broker and producers and consumers, see Flow control in the AMQ Core Protocol JMS documentation.
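For reference, the following broker.xml sketch combines the per-address and global paging settings described in this chapter in one place. This is a minimal sketch: the address name, paging directory, and limit values are illustrative assumptions, not recommended settings.
<configuration>
  <core>
    <!-- Directory where page files are written (illustrative path) -->
    <paging-directory>/var/lib/amq-broker/paging</paging-directory>
    <!-- Global limits that apply across all addresses -->
    <global-max-size>1GB</global-max-size>
    <global-max-messages>900000</global-max-messages>
    <!-- Block incoming messages when disk usage reaches 80 percent -->
    <max-disk-usage>80</max-disk-usage>
    <address-settings>
      <address-setting match="my.paged.address">
        <!-- Per-address limits that trigger the address-full-policy -->
        <max-size-bytes>104857600</max-size-bytes>
        <max-size-messages>20000</max-size-messages>
        <page-size-bytes>10485760</page-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
        <!-- Cap on paged messages read back into memory for consumers -->
        <max-read-page-bytes>20MB</max-read-page-bytes>
      </address-setting>
    </address-settings>
  </core>
</configuration>
Whichever of the per-address or global limits is reached first triggers the configured policy, so you can use either level of control on its own or combine them as shown here.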
|
[
"<configuration ...> <core ...> <paging-directory> /path/to/paging-directory </paging-directory> </core> </configuration>",
"<address-settings> <address-setting match=\"my.paged.address\"> <max-size-bytes>104857600</max-size-bytes> <max-size-messages>20000</max-size-messages> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings>",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<configuration> <core> <global-max-size>1GB</global-max-size> <global-max-messages>900000</global-max-messages> </core> </configuration>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<address-settings> <address-setting match=\"match=\"my.paged.address\"\"> <page-limit-bytes>10G</page-limit-bytes> <page-limit-messages>1000000</page-limit-messages> <page-full-policy>FAIL</page-full-policy> </address-setting> </address-settings>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"address-settings> <address-setting match=\"my.paged.address\"> <max-read-page-messages>104857600</max-read-page-messages> <max-read-page-bytes>20MB</max-read-page-bytes> </address-setting> </address-settings>",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<configuration> <core> <max-disk-usage>80</max-disk-usage> </core> </configuration>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<address-settings> <address-setting match=\"my.blocking.address\"> <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> </address-setting> </address-settings>",
"<address-settings> <address-setting match=\"my.amqp.blocking.address\"> <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> </address-setting> </address-settings>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/assembly-br-configuring-maximum-memory-usage-for-addresses_configuring
|
14.7. Security and Data Access
|
14.7. Security and Data Access You have several options for defining data access security for your VDB via the VDB Editor. The first level is provided by the model visibility check-box in the Models section (Spyglass column). If unchecked, that model and its contents will not be returned by the Teiid runtime with the standard JDBC metadata. The next level of security is provided by defining permissions for your data roles, which can be managed via the Data Roles tab in the VDB Editor. For a unique data role, each model and most objects within that model can have specific values of data access including the following: Security (Row-based condition and column masking) Create Read Update Delete Execute Alter Double-clicking the Security box for a table will launch the Row Filter Definition dialog where you can define a condition. Double-clicking the Security box for a column will launch the Column Masking Definition dialog where you can define a condition and column masking. In order to edit or remove security, select the Row Filter or Column Masking tabs and use the Edit or Remove buttons. Figure 14.2. Data Role dialog
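The VDB Editor persists what you configure on the Data Roles tab as data role definitions inside the VDB. As a rough, hypothetical sketch only — the element names, role name, condition, and mask below are illustrative assumptions, not output copied from the editor — such a definition in a vdb.xml file can look along these lines:
<data-role name="ReadOnlyRole" any-authenticated="false">
    <description>Read-only access to one table, with a row filter and a column mask</description>
    <permission>
        <resource-name>CustomerModel.CustomerTable</resource-name>
        <allow-read>true</allow-read>
        <!-- Row-based condition, as defined in the Row Filter Definition dialog -->
        <condition>status = 'ACTIVE'</condition>
    </permission>
    <permission>
        <resource-name>CustomerModel.CustomerTable.ssn</resource-name>
        <!-- Column masking, as defined in the Column Masking Definition dialog -->
        <mask>'XXX-XX-XXXX'</mask>
    </permission>
    <mapped-role-name>readers</mapped-role-name>
</data-role>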
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/security_and_data_access
|
Installing on vSphere
|
Installing on vSphere OpenShift Container Platform 4.15 Installing OpenShift Container Platform on vSphere Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_vsphere/index
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/configuring_client-side_notifications_for_cryostat/making-open-source-more-inclusive
|
B.11. Unable to add bridge br0 port vnet0: No such device
|
B.11. Unable to add bridge br0 port vnet0: No such device Symptom The following error message appears: For example, if the bridge name is br0 , the error message will appear as: In libvirt versions 0.9.6 and earlier, the same error appears as: Or for example, if the bridge is named br0 : Investigation Both error messages reveal that the bridge device specified in the guest's (or domain's) <interface> definition does not exist. To verify that the bridge device listed in the error message does not exist, use the shell command ip addr show br0 . A message similar to this confirms that the host has no bridge by that name: If this is the case, continue to the solution. However, if the resulting message is similar to the following, the issue exists elsewhere: Solution Edit the existing bridge or create a new bridge with virsh Use virsh to either edit the settings of an existing bridge or network, or to add the bridge device to the host system configuration. Edit the existing bridge settings using virsh Use virsh edit name_of_guest to change the <interface> definition to use a bridge or network that already exists. For example, change type='bridge' to type='network' , and <source bridge='br0'/> to <source network='default'/> (an example <interface> snippet appears at the end of this section). Create a host bridge using virsh For libvirt version 0.9.8 and later, a bridge device can be created with the virsh iface-bridge command. This creates a bridge device br0 and attaches eth0 , the physical network interface, to it: Optional: If desired, remove this bridge and restore the original eth0 configuration with this command: Create a host bridge manually For older versions of libvirt , it is possible to manually create a bridge device on the host. Refer to Section 11.3, "Bridged Networking with libvirt" for instructions.
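To illustrate the edit described in the solution, a guest's <interface> definition before and after the change might look like the following; the MAC address and model are placeholders:
<!-- Original definition that references the missing host bridge -->
<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>

<!-- Edited definition that uses the existing default libvirt network instead -->
<interface type='network'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source network='default'/>
  <model type='virtio'/>
</interface>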
|
[
"Unable to add bridge name_of_bridge port vnet0: No such device",
"Unable to add bridge br0 port vnet0: No such device",
"Failed to add tap interface to bridge name_of_bridge : No such device",
"Failed to add tap interface to bridge 'br0' : No such device",
"br0 : error fetching interface information: Device not found",
"3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global br0",
"virsh iface-bridge eth0 br0",
"virsh iface-unbridge br0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/App_Bridge_Device
|
Chapter 9. Assigning roles to hosts
|
Chapter 9. Assigning roles to hosts You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or worker . The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API. If you do not select a role, the system selects one for you. You can change the role at any time before installation starts. 9.1. Select a role using the UI You can select a role after the host finishes its discovery. Procedure Go to the Host Discovery tab and scroll down to the Host Inventory table. Select the Auto-assign drop-down for the required host. Select Control plane node to assign this host a control plane role. Select Worker to assign this host a worker role. Check the validation status. 9.2. Select a role using the API You can select a role for the host using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host may be one of two roles: master : A host with the master role will operate as a control plane host. worker : A host with the worker role will operate as a worker host. By default, the Assisted Installer sets a host to auto-assign , which means the installer will determine whether the host is a master or worker role automatically. Use this procedure to set the host's role. Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: USD source refresh-token Get the host IDs: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] Modify the host_role setting: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Replace <host_id> with the ID of the host. 9.3. Auto-assigning roles Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors the host's memory, CPU, and disk space. It aims to assign a control plane role to the 3 weakest hosts that meet the minimum requirements for control plane nodes. All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads. You can override the auto-assign decision at any time before installation. The validations make sure that the auto selection is a valid one. 9.4. Additional resources Prerequisites
|
[
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/assembly_role-assignment
|
17.4. Connection Factories
|
17.4. Connection Factories In Red Hat JBoss Data Grid, all JDBC cache stores rely on a ConnectionFactory implementation to obtain a database connection. This process is also known as connection management or pooling. A connection factory can be specified using the ConnectionFactoryClass configuration attribute. JBoss Data Grid includes the following ConnectionFactory implementations: ManagedConnectionFactory SimpleConnectionFactory PooledConnectionFactory 17.4.1. About ManagedConnectionFactory ManagedConnectionFactory is a connection factory that is ideal for use within managed environments such as application servers. This connection factory can explore a configured location in the JNDI tree and delegate connection management to the DataSource . 17.4.2. About SimpleConnectionFactory SimpleConnectionFactory is a connection factory that creates database connections on a per invocation basis. This connection factory is not designed for use in a production environment.
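As a rough sketch only — the exact element names and schema namespace depend on your JBoss Data Grid version, so treat the following as an illustrative assumption rather than a definitive configuration — a string-based JDBC cache store typically selects its connection factory declaratively, for example by delegating connection management to a JNDI DataSource (the ManagedConnectionFactory case):
<persistence>
    <stringKeyedJdbcStore xmlns="urn:infinispan:config:jdbc:6.0">
        <!-- Managed environment: delegate connection management to a container DataSource -->
        <dataSource jndiUrl="java:jboss/datasources/ExampleDS"/>
        <stringKeyedTable prefix="ISPN_STRING_TABLE" createOnStart="true">
            <idColumn name="ID_COLUMN" type="VARCHAR(255)"/>
            <dataColumn name="DATA_COLUMN" type="BINARY"/>
            <timestampColumn name="TIMESTAMP_COLUMN" type="BIGINT"/>
        </stringKeyedTable>
    </stringKeyedJdbcStore>
</persistence>
In the same spirit, a connectionPool element (connections pooled by the store itself) or a simpleConnection element (per-invocation connections, not intended for production) would take the place of the dataSource element for the other two implementations; verify the exact names against the schema shipped with your version.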
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-connection_factories
|
Chapter 18. Virtualization
|
Chapter 18. Virtualization KVM virtualization on IBM Z KVM virtualization is now supported on IBM Z. However, this feature is only available in the newly introduced user space based on kernel version 4.14, provided by the kernel-alt packages. Also note that due to hardware differences, certain features and functionalities of KVM virtualization differ from what is supported on AMD64 and Intel 64 systems. For details on installing and using KVM virtualization on IBM Z, see the Virtualization Deployment and Administration Guide. (BZ# 1400070 , BZ#1379517, BZ#1479525, BZ#1479526, BZ#1471761) KVM virtualization supported on IBM POWER9 With this update, KVM virtualization is supported on IBM POWER9 systems, which makes it possible to use KVM virtualization on IBM POWER9 machines. However, this feature is only available in the newly introduced user space based on kernel version 4.14, provided by the kernel-alt packages. Also note that due to hardware differences, certain features and functionalities of KVM virtualization on IBM POWER9 differ from what is supported on AMD64 and Intel 64 systems. For details on installing and using KVM virtualization on POWER9 systems, see the Virtualization Deployment and Administration Guide. (BZ# 1465503 , BZ#1478482, BZ#1478478) KVM virtualization supported on IBM POWER8 With this update, KVM virtualization is supported on IBM POWER8 systems, which makes it possible to use KVM virtualization on IBM POWER8 machines. Note that due to hardware differences, certain features and functionalities of KVM virtualization on IBM POWER8 differ from what is supported on AMD64 and Intel 64 systems. For details on installing and using KVM virtualization on POWER8 systems, see the Virtualization Deployment and Administration Guide. (BZ#1531672) NVIDIA GPU devices can now be used by multiple guests simultaneously The NVIDIA vGPU feature is now supported on Red Hat Enterprise Linux 7. This enables dividing a vGPU-compatible NVIDIA GPU into multiple virtual devices referred to as mediated devices . By assigning mediated devices to guest virtual machines, these guests are able to share the performance of a single physical GPU. To configure this feature, manually create a mediated device for the libvirt service to be able to use it as a vGPU. For details, see the Virtualization Deployment and Administration Guide. (BZ#1292451) KASLR for KVM guests Red Hat Enterprise Linux 7.5 introduces the Kernel Address Space Randomization (KASLR) feature for KVM guest virtual machines. KASLR enables randomizing the physical and virtual address at which the kernel image is decompressed, and thus prevents guest security exploits based on the location of kernel objects. KASLR is activated by default, but can be deactivated on a specific guest by adding the nokaslr string to the guest's kernel command line. Note that kernel crash dumps of guests with KASLR activated cannot be analyzed using the crash utility. To fix this, add the <vmcoreinfo/> element to the <features> section of the XML configuration files of your guests (a minimal example of this element appears at the end of this chapter). However, KVM guests with <vmcoreinfo/> cannot be migrated to a host system that does not support this element. This includes hosts that use Red Hat Enterprise Linux 7.4 and earlier. (BZ# 1411490 , BZ# 1395248 ) Parallel decompression of OVA files supported With this release, the pigz and pxz decompression utilities are supported by the virt-v2v utility. These utilities speed up extraction of OVA files compressed with the gzip and xz utilities on multi-processor machines.
In addition, the command-line interfaces for pigz and pxz are fully compatible with the command-line interfaces for gzip and xz . If pigz and pxz are installed, they are used by default. If pigz and pxz are not installed, there is no change to the extraction behavior. (BZ# 1448739 ) SMAP now supported on Cannonlake guests With this update, the Supervisor Mode Access Prevention (SMAP) feature is supported on guests that use the 7th Generation Intel Processors codenamed Cannonlake. This prevents malicious programs from forcing the kernel to use data from a user-space program, and thus increases the security of the guests. To verify that your host CPU can provide SMAP for your guest, use the virsh capabilities command and look for the <feature name='smap'/> string. (BZ#1465223) libvirt rebased to 3.9.0 The libvirt packages have been upgraded to version 3.9.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: Sparse files are now preserved after moving them to or from another host. Response limits for remote procedure calls (RPCs) have been increased. Virtualized IBM POWER9 CPUs are now supported. Attaching devices to running guest virtual machines, also known as device hot plug, now supports more device types, such as input devices. The libvirt library has been secured against the CVE-2017-1000256 and CVE-2017-5715 security issues. VFIO-mediated devices now function more reliably. (BZ# 1472263 ) virt-manager rebased to 1.4.3 The virt-manager packages have been upgraded to version 1.4.3, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: The virt-manager interface now displays the correct CPU models when creating a guest virtual machine that does not use the AMD64 and Intel 64 architectures. The default device selection has been optimized for guests using the IBM POWER, IBM Z, or the 64-bit ARM architectures. If an installed network card on the host system is compatible with single root I/O virtualization (SR-IOV), it is now possible to create a virtual network that lists a pool of available virtual functions of the selected SR-IOV-capable card. The selection of OS types and versions for a newly created guest has been expanded. (BZ# 1472271 ) virt-what rebased to version 1.18 The virt-what packages have been updated to upstream version 1.18, which provides a number of bug fixes and enhancements over the previous version. Notably, the virt-what utility can now detect the following guest virtual machine types: Guests running on a 64-bit ARM host and booted using the Advanced Configuration and Power Interfaces. Guests running on the oVirt or Red Hat Virtualization hypervisor. Guests running on an IBM POWER7 host that uses logical partitioning (LPAR). Guests running on the FreeBSD bhyve hypervisor. Guests running on an IBM Z host that uses the KVM hypervisor. Guests emulated using the QEMU Tiny Code Generator (TCG). Guests running on the OpenBSD virtual machine monitor (VMM) service. Guests running on the Amazon Web Services (AWS) platform. Guests running on the Oracle VM Server for SPARC platform. In addition, the following bugs have been fixed: The virt-what utility no longer fails on platforms that do not use the System Management BIOS (SMBIOS). virt-what now works correctly even if the USDPATH variable is not set. (BZ# 1476878 ) tboot rebased to version 1.96 The tboot packages have been upgraded to upstream version 1.96, which fixes several bugs and adds various enhancements.
Notable changes include: The OpenSSL library versions 1.1.0 and later are now supported for RSA key manipulation and ECDSA signature verification. Support has been added for event logs of Trusted Computing Group (TCG) trusted platform modules (TPMs). The x2APIC series of Advanced Programmable Interrupt Controller (APICs) is now supported. Additional checks have been added to prevent kernel images from being overwritten unintentionally. The tboot utility can no longer overwrite modules while moving them. A bug has been fixed that caused sealing and unsealing Amazon Simple Storage Service (S3) secrets to fail. Several null pointer dereference bugs have been fixed. (BZ#1457529) virt-v2v can convert VMware guests with snapshots The virt-v2v utility has been enhanced to convert VMware guest virtual machines that have snapshots. Note that after the conversion, the status of such a guest is set to the top-most snapshot and the other snapshots are removed. (BZ# 1172425 ) virt-rescue enhanced This release of the virt-rescue utility includes the following enhancements: Ctrl+character sequences now act on commands run in virt-rescue and not on virt-rescue itself. The -i option allows users to mount all disks after inspecting the guest. (BZ# 1438710 ) virt-v2v now converts Linux guests encrypted with LUKS With this update, the virt-v2v utility can convert Linux guests installed with full-disk LUKS encryption, that is when all the partitions other than the /boot partition are encrypted. Notes: The virt-v2v utility does not support conversion of Linux guests on partitions with other types of encryption schemes. The virt-p2v utility does not support conversion of Linux machines installed with full-disk LUKS encryption. (BZ# 1451665 ) CAT support added to libvirt on specific CPU models The libvirt service now supports Cache Allocation Technology (CAT) on specific CPU models. This enables guest virtual machines to have part of their host's CPU cache allocated for their vCPU threads. (BZ# 1289368 ) PTP device added to improve time synchronization of KVM guests The PTP device has been added for KVM guest virtual machines. It enhances the kvmclocks service by preventing clock divergence between the host and the guest due to NTP adjustment. As a result, the PTP device ensures more reliable time synchronization between the KVM host and its guests. For details on setting up the PTP device, see the Virtualization Deployment and Administration Guide. (BZ# 1379822 )
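Returning to the KASLR entry earlier in this chapter: the <vmcoreinfo/> element that restores crash-dump analysis for KASLR-enabled guests belongs in the <features> section of the guest's libvirt domain XML. A minimal sketch follows; the other feature elements shown are common placeholders and depend on the guest:
<domain type='kvm'>
  ...
  <features>
    <acpi/>
    <apic/>
    <vmcoreinfo/>
  </features>
  ...
</domain>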
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_virtualization
|
Providing feedback on Red Hat JBoss Web Server documentation
|
Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_4_release_notes/providing-direct-documentation-feedback_6.0.4_rn
|
Chapter 11. Admin Console Access Control and Permissions
|
Chapter 11. Admin Console Access Control and Permissions Each realm created on the Red Hat Single Sign-On has a dedicated Admin Console from which that realm can be managed. The master realm is a special realm that allows admins to manage more than one realm on the system. You can also define fine-grained access to users in different realms to manage the server. This chapter goes over all the scenarios for this. 11.1. Master Realm Access Control The master realm in Red Hat Single Sign-On is a special realm and treated differently than other realms. Users in the Red Hat Single Sign-On master realm can be granted permission to manage zero or more realms that are deployed on the Red Hat Single Sign-On server. When a realm is created, Red Hat Single Sign-On automatically creates various roles that grant fine-grain permissions to access that new realm. Access to The Admin Console and Admin REST endpoints can be controlled by mapping these roles to users in the master realm. It's possible to create multiple super users, as well as users that can only manage specific realms. 11.1.1. Global Roles There are two realm-level roles in the master realm. These are: admin create-realm Users with the admin role are super users and have full access to manage any realm on the server. Users with the create-realm role are allowed to create new realms. They will be granted full access to any new realm they create. 11.1.2. Realm Specific Roles Admin users within the master realm can be granted management privileges to one or more other realms in the system. Each realm in Red Hat Single Sign-On is represented by a client in the master realm. The name of the client is <realm name>-realm . These clients each have client-level roles defined which define varying level of access to manage an individual realm. The roles available are: view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. Important Admins with the manage-users role will only be able to assign admin roles to users that they themselves have. So, if an admin has the manage-users role but doesn't have the manage-realm role, they will not be able to assign this role. 11.2. Dedicated Realm Admin Consoles Each realm has a dedicated Admin Console that can be accessed by going to the url /auth/admin/{realm-name}/console . Users within that realm can be granted realm management permissions by assigning specific user role mappings. Each realm has a built-in client called realm-management . You can view this client by going to the Clients left menu item of your realm. This client defines client-level roles that specify permissions that can be granted to manage the realm. view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. 11.3. Fine Grain Admin Permissions Note Fine Grain Admin Permissions is Technology Preview and is not fully supported. This feature is disabled by default. To enable start the server with -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.admin_fine_grained_authz=enabled . For more details see Profiles . 
Sometimes roles like manage-realm or manage-users are too coarse grain and you want to create restricted admin accounts that have more fine grain permissions. Red Hat Single Sign-On allows you to define and assign restricted access policies for managing a realm. Things like: Managing one specific client Managing users that belong to a specific group Managing membership of a group Limited user management. Fine grain impersonation control Being able to assign a specific restricted set of roles to users. Being able to assign a specific restricted set of roles to a composite role. Being able to assign a specific restricted set of roles to a client's scope. New general policies for viewing and managing users, groups, roles, and clients. There are some important things to note about fine grain admin permissions: Fine grain admin permissions were implemented on top of Authorization Services . It is highly recommended that you read up on those features before diving into fine grain permissions. Fine grain permissions are only available within dedicated admin consoles and admins defined within those realms. You cannot define cross-realm fine grain permissions. Fine grain permissions are used to grant additional permissions. You cannot override the default behavior of the built in admin roles. 11.3.1. Managing One Specific Client Let's look first at allowing an admin to manage one client and one client only. In our example we have a realm called test and a client called sales-application . In realm test we will give a user in that realm permission to only manage that application. Important You cannot do cross realm fine grain permissions. Admins in the master realm are limited to the predefined admin roles defined earlier in this chapter. 11.3.1.1. Permission Setup The first thing we must do is log in to the Admin Console so we can set up permissions for that client. We navigate to the management section of the client we want to define fine-grain permissions for. Client Management You should see a tab menu item called Permissions . Click on that tab. Client Permissions Tab By default, each client is not enabled to do fine grain permissions. So turn the Permissions Enabled switch to on to initialize permissions. Important If you turn the Permissions Enabled switch to off, it will delete any and all permissions you have defined for this client. Client Permissions Tab When you switch Permissions Enabled to on, it initializes various permission objects behind the scenes using Authorization Services . For this example, we're interested in the manage permission for the client. Clicking on that will redirect you to the permission that handles the manage permission for the client. All authorization objects are contained in the realm-management client's Authorization tab. Client Manage Permission When first initialized, the manage permission does not have any policies associated with it. You will need to create one by going to the policy tab. To get there fast, click on the Authorization link shown in the above image. Then click on the policies tab. There's a pull down menu on this page called Create policy . There's a multitude of policies you can define. You can define a policy that is associated with a role or a group or even define rules in JavaScript. For this simple example, we're going to create a User Policy . User Policy This policy will match a hard-coded user in the user database. In this case it is the sales-admin user.
We must then go back to the sales-application client's manage permission page and assign the policy to the permission object. Assign User Policy The sales-admin user now has permission to manage the sales-application client. There's one more thing we have to do. Go to the Role Mappings tab and assign the query-clients role to the sales-admin . Assign query-clients Why do you have to do this? This role tells the Admin Console what menu items to render when the sales-admin visits the Admin Console. The query-clients role tells the Admin Console that it should render client menus for the sales-admin user. IMPORTANT If you do not set the query-clients role, restricted admins like sales-admin will not see any menu options when they log into the Admin Console. 11.3.1.2. Testing It Out We log out of the master realm and log in again to the dedicated admin console for the test realm using the sales-admin as a username. This is located under /auth/admin/test/console . Sales Admin Login This admin is now able to manage this one client. 11.3.2. Restrict User Role Mapping Another thing you might want to do is to restrict the set of roles an admin is allowed to assign to a user. Continuing our last example, let's expand the permission set of the 'sales-admin' user so that he can also control which users are allowed to access this application. Through fine grain permissions we can enable it so that the sales-admin can only assign roles that grant specific access to the sales-application . We can also restrict it so that the admin can only map roles and not perform any other types of user administration. The sales-application has defined three different client roles. Sales Application Roles We want the sales-admin user to be able to map these roles to any user in the system. The first step to do this is to allow the role to be mapped by the admin. If we click on the viewLeads role, you'll see that there is a Permissions tab for this role. View Leads Role Permission Tab If we click on that tab and turn the Permissions Enabled on, you'll see that there are a number of actions we can apply policies to. View Leads Permissions The one we are interested in is map-role . Click on this permission and add the same User Policy that was created in the earlier example. Map-roles Permission What we've done is say that the sales-admin can map the viewLeads role. What we have not done is specify which users the admin is allowed to map this role to. To do that we must go to the Users section of the admin console for this realm. Clicking on the Users left menu item brings us to the users interface of the realm. You should see a Permissions tab. Click on that and enable it. Users Permissions The permission we are interested in is map-roles . This is a restrictive policy in that it only allows admins the ability to map roles to a user. If we click on the map-roles permission and again add the User Policy we created for this, our sales-admin will be able to map roles to any user. The last thing we have to do is add the view-users role to the sales-admin . This will allow the admin to view users in the realm he wants to add the sales-application roles to. Add view-users 11.3.2.1. Testing It Out We log out of the master realm and log in again to the dedicated admin console for the test realm using the sales-admin as a username. This is located under /auth/admin/test/console . You will see that now the sales-admin can view users in the system.
If you select one of the users, you'll see that each user detail page is read-only, except for the Role Mappings tab. Going to this tab, you'll find that there are no Available roles for the admin to map to the user except when we browse the sales-application roles. Add viewLeads We've only specified that the sales-admin can map the viewLeads role. 11.3.2.2. Per Client map-roles Shortcut It would be tedious if we had to do this for every client role that the sales-application published. To make things easier, there's a way to specify that an admin can map any role defined by a client. If we log back into the admin console as our master realm admin and go back to the sales-application permissions page, you'll see the map-roles permission. Client map-roles Permission If you grant access to this particular permission to an admin, that admin will be able to map any role defined by the client. 11.3.3. Full List of Permissions You can do a lot more with fine grain permissions beyond managing a specific client or the specific roles of a client. This chapter defines the whole list of permission types that can be described for a realm. 11.3.3.1. Role When going to the Permissions tab for a specific role, you will see these permission types listed. map-role Policies that decide if an admin can map this role to a user. These policies only specify that the role can be mapped to a user, not that the admin is allowed to perform user role mapping tasks. The admin will also have to have manage or role mapping permissions. See Users Permissions for more information. map-role-composite Policies that decide if an admin can map this role as a composite to another role. An admin can define roles for a client if he has manage permissions for that client but he will not be able to add composites to those roles unless he has the map-role-composite privileges for the role he wants to add as a composite. map-role-client-scope Policies that decide if an admin can apply this role to the scope of a client. Even if the admin can manage the client, he will not have permission to create tokens for that client that contain this role unless this privilege is granted. 11.3.3.2. Client When going to the Permissions tab for a specific client, you will see these permission types listed. view Policies that decide if an admin can view the client's configuration. manage Policies that decide if an admin can view and manage the client's configuration. There are some issues with this in that privileges could be leaked unintentionally. For example, the admin could define a protocol mapper that hardcoded a role even if the admin does not have privileges to map the role to the client's scope. This is currently a limitation of protocol mappers as they don't have a way to assign individual permissions to them like roles do. configure Reduced set of privileges to manage the client. It's like the manage scope except the admin is not allowed to define protocol mappers, change the client template, or the client's scope. map-roles Policies that decide if an admin can map any role defined by the client to a user. This is a shortcut, ease-of-use feature to avoid having to define policies for each and every role defined by the client. map-roles-composite Policies that decide if an admin can map any role defined by the client as a composite to another role. This is a shortcut, ease-of-use feature to avoid having to define policies for each and every role defined by the client.
map-roles-client-scope Policies that decide if an admin can map any role defined by the client to the scope of another client. This is a shortcut, ease-of-use feature to avoid having to define policies for each and every role defined by the client. 11.3.3.3. Users When going to the Permissions tab for all users, you will see these permission types listed. view Policies that decide if an admin can view all users in the realm. manage Policies that decide if an admin can manage all users in the realm. This permission grants the admin the privilege to perform user role mappings, but it does not specify which roles the admin is allowed to map. You'll need to define the privilege for each role you want the admin to be able to map. map-roles This is a subset of the privileges granted by the manage scope. In this case the admin is only allowed to map roles. The admin is not allowed to perform any other user management operation. Also, like manage , the roles that the admin is allowed to apply must be specified per role or per set of roles if dealing with client roles. manage-group-membership Similar to map-roles except that it pertains to group membership: which groups a user can be added or removed from. These policies just grant the admin permission to manage group membership, not which groups the admin is allowed to manage membership for. You'll have to specify policies for each group's manage-members permission. impersonate Policies that decide if the admin is allowed to impersonate other users. These policies are applied to the admin's attributes and role mappings. user-impersonated Policies that decide which users can be impersonated. These policies will be applied to the user being impersonated. For example, you might want to define a policy that will forbid anybody from impersonating a user that has admin privileges. 11.3.3.4. Group When going to the Permissions tab for a specific group, you will see these permission types listed. view Policies that decide if the admin can view information about the group. manage Policies that decide if the admin can manage the configuration of the group. view-members Policies that decide if the admin can view the user details of members of the group. manage-members Policies that decide if the admin can manage the users that belong to this group. manage-membership Policies that decide if an admin can change the membership of the group. Add or remove members from the group. 11.4. Realm Keys The authentication protocols that are used by Red Hat Single Sign-On require cryptographic signatures and sometimes encryption. Red Hat Single Sign-On uses asymmetric key pairs, a private and public key, to accomplish this. Red Hat Single Sign-On has a single active keypair at a time, but can have several passive keys as well. The active keypair is used to create new signatures, while the passive keypairs can be used to verify signatures. This makes it possible to regularly rotate the keys without any downtime or interruption to users. When a realm is created, a key pair and a self-signed certificate are automatically generated. To view the active keys for a realm, select the realm in the admin console, click on Realm settings , then Keys . This will show the currently active keys for the realm. To view passive or disabled keys, select Passive or Disabled . A keypair can have the status Active , but still not be selected as the currently active keypair for the realm.
The active pair that is used for signatures is selected based on the first key provider, sorted by priority, that is able to provide an active keypair. 11.4.1. Rotating keys It's recommended to regularly rotate keys. To do so, you should start by creating new keys with a higher priority than the existing active keys, or create new keys with the same priority and make the existing keys passive. Once new keys are available, all new tokens and cookies will be signed with the new keys. When a user authenticates to an application, the SSO cookie is updated with the new signature. When OpenID Connect tokens are refreshed, new tokens are signed with the new keys. This means that over time all cookies and tokens will use the new keys and after a while the old keys can be removed. How long you wait to delete old keys is a tradeoff between security and making sure all cookies and tokens are updated. In general, it should be acceptable to drop old keys after a few weeks. Users that have not been active in the period between when the new keys were added and the old keys were removed will have to re-authenticate. This also applies to offline tokens. To make sure they are updated, the applications need to refresh the tokens before the old keys are removed. As a guideline, it may be a good idea to create new keys every 3-6 months and delete old keys 1-2 months after the new keys were created. 11.4.2. Adding a generated keypair To add a new generated keypair, select Providers and choose rsa-generated from the dropdown. You can change the priority to make sure the new keypair becomes the active keypair. You can also change the keysize if you want smaller or larger keys (default is 2048, supported values are 1024, 2048 and 4096). Click Save to add the new keys. This will generate a new keypair, including a self-signed certificate. Changing the priority for a provider will not cause the keys to be re-generated, but if you want to change the keysize you can edit the provider and new keys will be generated. 11.4.3. Adding an existing keypair and certificate To add a keypair and certificate obtained elsewhere, select Providers and choose rsa from the dropdown. You can change the priority to make sure the new keypair becomes the active keypair. Click on Select file for Private RSA Key to upload your private key. The file should be encoded in PEM format. You don't need to upload the public key as it is automatically extracted from the private key. If you have a signed certificate for the keys, click on Select file for X509 Certificate . If you don't have one, you can skip this and a self-signed certificate will be generated. 11.4.4. Loading keys from a Java Keystore To add a keypair and certificate stored in a Java Keystore file on the host, select Providers and choose java-keystore from the dropdown. You can change the priority to make sure the new keypair becomes the active keypair. Fill in the values for Keystore , Keystore Password , Key Alias and Key Password and click on Save . 11.4.5. Making keys passive Locate the keypair in Active , then click on the provider in the Provider column. This will take you to the configuration screen for the key provider for the keys. Click on Active to turn it OFF , then click on Save . The keys will no longer be active and can only be used for verifying signatures. 11.4.6. Disabling keys Locate the keypair in Active , then click on the provider in the Provider column. This will take you to the configuration screen for the key provider for the keys.
Click on Enabled to turn it OFF , then click on Save . The keys will no longer be enabled. Alternatively, you can delete the provider from the Providers table. 11.4.7. Compromised keys Red Hat Single Sign-On stores the signing keys locally only, and they are never shared with the client applications, users or other entities. However, if you think that your realm signing key was compromised, you should first generate a new keypair as described above and then immediately remove the compromised keypair. Then, to ensure that client applications won't accept the tokens signed by the compromised key, you should update and push the not-before policy for the realm, which you can do from the admin console. Pushing the new policy ensures that client applications won't accept the existing tokens signed by the compromised key, and also that the client applications will be forced to download the new keypair from Red Hat Single Sign-On, hence the tokens signed by the compromised key won't be valid anymore. Note that your REST and confidential clients must have the Admin URL set, so that Red Hat Single Sign-On is able to send them the request about the pushed not-before policy.
| null |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/admin_permissions
|
Chapter 10. Integrating by using the syslog protocol
|
Chapter 10. Integrating by using the syslog protocol Syslog is an event logging protocol that applications use to send messages to a central location, such as a SIEM or a syslog collector, for data retention and security investigations. With Red Hat Advanced Cluster Security for Kubernetes, you can send alerts and audit events using the syslog protocol. Note Forwarding events by using the syslog protocol requires the Red Hat Advanced Cluster Security for Kubernetes version 3.0.52 or newer. When you use the syslog integration, Red Hat Advanced Cluster Security for Kubernetes forwards both violation alerts that you configure and all audit events. Currently, Red Hat Advanced Cluster Security for Kubernetes only supports CEF (Common Event Format). The following steps represent a high-level workflow for integrating Red Hat Advanced Cluster Security for Kubernetes with a syslog events receiver: Set up a syslog events receiver to receive alerts. Use the receiver's address and port number to set up notifications in the Red Hat Advanced Cluster Security for Kubernetes. After the configuration, Red Hat Advanced Cluster Security for Kubernetes automatically sends all violations and audit events to the configured syslog receiver. 10.1. Configuring syslog integration with Red Hat Advanced Cluster Security for Kubernetes Create a new syslog integration in Red Hat Advanced Cluster Security for Kubernetes (RHACS). Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Syslog . Click New Integration (add icon). Enter a name for Integration Name . Select the Logging Facility value from local0 through local7 . Enter your Receiver Host address and Receiver Port number. If you are using TLS, turn on the Use TLS toggle. If your syslog receiver uses a certificate that is not trusted, turn on the Disable TLS Certificate Validation (Insecure) toggle. Otherwise, leave this toggle off. Click Add new extra field to add extra fields. For example, if your syslog receiver accepts objects from multiple sources, type source and rhacs in the Key and Value fields. You can filter using the custom values in your syslog receiver to identify all alerts from RHACS. Select Test ( checkmark icon) to send a test message to verify that the integration with your syslog receiver is working. Select Create ( save icon) to create the configuration.
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-using-syslog-protocol
|
Chapter 1. Overview
|
Chapter 1. Overview Red Hat Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure lifecycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. Simple to adopt, use, and understand, Red Hat Ansible Automation Platform provides the tools needed to rapidly implement enterprise-wide automation, no matter where you are in your automation journey. 1.1. What's included in Ansible Automation Platform Ansible Automation Platform Automation controller Automation hub Automation services catalog Insights for Ansible Automation Platform 2.3 4.3 4.6 hosted service 1.0 Private (Technology Preview) hosted service (Retired) hosted service 1.2. Red Hat Ansible Automation Platform life cycle Red Hat publishes a product life cycle page that identifies the levels of maintenance for each Ansible Automation Platform release. Refer to Red Hat Ansible Automation Platform Life Cycle . 1.3. Upgrading Ansible Automation Platform Use the installer to perform upgrades to maintenance versions of Ansible Automation Platform. The installer performs all actions required to upgrade to the latest versions of Ansible Automation Platform, including Ansible Tower and Private Automation Hub. Important Do not use yum update to run upgrades. Use the installer instead. Additional resources Refer to the table in What's included in Ansible Automation Platform for information on maintenance releases of Ansible Automation Platform. For more information on upgrading your Ansible Automation Platform, see the Red Hat Ansible Automation Platform Upgrade and Migration Guide . For procedures related to using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide .
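As a sketch of the installer-driven upgrade flow (the bundle file name, directory, and inventory contents are placeholders for your environment), you typically download the latest setup bundle from the Red Hat Customer Portal, reuse or update your existing inventory file, and run setup.sh instead of yum update:

# Extract the downloaded setup bundle (the file name is a placeholder for the version you downloaded).
tar xvzf ansible-automation-platform-setup-bundle-<version>.tar.gz
cd ansible-automation-platform-setup-bundle-<version>

# Review the inventory that describes your automation controller and automation hub nodes,
# then run the installer; it upgrades the existing installation in place.
vi inventory
./setup.sh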
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_release_notes/platform-introduction
|
Chapter 3. Service Mesh 1.x
|
Chapter 3. Service Mesh 1.x 3.1. Service Mesh Release Notes Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . 3.1.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 3.1.2. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. It adds a transparent layer on existing distributed applications without requiring any changes to the application code. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. Service Mesh, which is based on the open source Istio project , provides an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. Note Red Hat OpenShift Service Mesh 3 is generally available. For more information, see Red Hat OpenShift Service Mesh 3.0 . 3.1.3. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to Red Hat OpenShift Service Mesh. For prompt support, supply diagnostic information for both OpenShift Container Platform and Red Hat OpenShift Service Mesh. 3.1.3.1. 
About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 3.1.3.2. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. 3.1.3.3. About collecting service mesh data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with Red Hat OpenShift Service Mesh. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure To collect Red Hat OpenShift Service Mesh data with must-gather , you must specify the Red Hat OpenShift Service Mesh image. USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 To collect Red Hat OpenShift Service Mesh data for a specific Service Mesh control plane namespace with must-gather , you must specify the Red Hat OpenShift Service Mesh image and namespace. In this example, after gather, replace <namespace> with your Service Mesh control plane namespace, such as istio-system . USD oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace> This creates a local directory that contains the following items: The Istio Operator namespace and its child objects All control plane namespaces and their children objects All namespaces and their children objects that belong to any service mesh All Istio custom resource definitions (CRD) All Istio CRD objects, such as VirtualServices, in a given namespace All Istio webhooks 3.1.4. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for the Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. 
The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.1.4.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.1.4.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.1.5. New Features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues. 3.1.5.1. New features Red Hat OpenShift Service Mesh 1.1.18.2 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.1.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.2 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.21.1 3scale Istio Adapter 1.0.0 3.1.5.2. New features Red Hat OpenShift Service Mesh 1.1.18.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.2.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18.1 Component Version Istio 1.4.10 Jaeger 1.30.2 Kiali 1.12.20.1 3scale Istio Adapter 1.0.0 3.1.5.3. New features Red Hat OpenShift Service Mesh 1.1.18 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.3.1. Component versions included in Red Hat OpenShift Service Mesh version 1.1.18 Component Version Istio 1.4.10 Jaeger 1.24.0 Kiali 1.12.18 3scale Istio Adapter 1.0.0 3.1.5.4. New features Red Hat OpenShift Service Mesh 1.1.17.1 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs). 3.1.5.4.1. Change in how Red Hat OpenShift Service Mesh handles URI fragments Red Hat OpenShift Service Mesh contains a remotely exploitable vulnerability, CVE-2021-39156 , where an HTTP request with a fragment (a section in the end of a URI that begins with a # character) in the URI path could bypass the Istio URI path-based authorization policies. 
For instance, an Istio authorization policy denies requests sent to the URI path /user/profile . In the vulnerable versions, a request with URI path /user/profile#section1 bypasses the deny policy and routes to the backend (with the normalized URI path /user/profile%23section1 ), possibly leading to a security incident. You are impacted by this vulnerability if you use authorization policies with DENY actions and operation.paths , or ALLOW actions and operation.notPaths . With the mitigation, the fragment part of the request's URI is removed before the authorization and routing. This prevents a request with a fragment in its URI from bypassing authorization policies which are based on the URI without the fragment part. 3.1.5.4.2. Required update for authorization policies Istio generates hostnames for both the hostname itself and all matching ports. For instance, a virtual service or Gateway for a host of "httpbin.foo" generates a config matching "httpbin.foo and httpbin.foo:*". However, exact match authorization policies only match the exact string given for the hosts or notHosts fields. Your cluster is impacted if you have AuthorizationPolicy resources using exact string comparison for the rule to determine hosts or notHosts . You must update your authorization policy rules to use prefix match instead of exact match. For example, replacing hosts: ["httpbin.com"] with hosts: ["httpbin.com:*"] in the first AuthorizationPolicy example. First example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: ["dev"] to: - operation: hosts: ["httpbin.com","httpbin.com:*"] Second example AuthorizationPolicy using prefix match apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: ["httpbin.example.com:*"] 3.1.5.5. New features Red Hat OpenShift Service Mesh 1.1.17 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.6. New features Red Hat OpenShift Service Mesh 1.1.16 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.7. New features Red Hat OpenShift Service Mesh 1.1.15 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.8. New features Red Hat OpenShift Service Mesh 1.1.14 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Important There are manual steps that must be completed to address CVE-2021-29492 and CVE-2021-31920. 3.1.5.8.1. Manual updates required by CVE-2021-29492 and CVE-2021-31920 Istio contains a remotely exploitable vulnerability where an HTTP request path with multiple slashes or escaped slash characters ( %2F or %5C ) could potentially bypass an Istio authorization policy when path-based authorization rules are used. For example, assume an Istio cluster administrator defines an authorization DENY policy to reject the request at path /admin . A request sent to the URL path //admin will NOT be rejected by the authorization policy. According to RFC 3986 , the path //admin with multiple slashes should technically be treated as a different path from the /admin . 
However, some backend services choose to normalize the URL paths by merging multiple slashes into a single slash. This can result in a bypass of the authorization policy ( //admin does not match /admin ), and a user can access the resource at path /admin in the backend; this would represent a security incident. Your cluster is impacted by this vulnerability if you have authorization policies using ALLOW action + notPaths field or DENY action + paths field patterns. These patterns are vulnerable to unexpected policy bypasses. Your cluster is NOT impacted by this vulnerability if: You don't have authorization policies. Your authorization policies don't define paths or notPaths fields. Your authorization policies use ALLOW action + paths field or DENY action + notPaths field patterns. These patterns could only cause unexpected rejection instead of policy bypasses. The upgrade is optional for these cases. Note The Red Hat OpenShift Service Mesh configuration location for path normalization is different from the Istio configuration. 3.1.5.8.2. Updating the path normalization configuration Istio authorization policies can be based on the URL paths in the HTTP request. Path normalization , also known as URI normalization, modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization. Istio supports the following normalization schemes on the request paths before evaluating against the authorization policies and routing the requests: Table 3.1. Normalization schemes Option Description Example Notes NONE No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. ../%2Fa../b is evaluated by the authorization policies and sent to your service. This setting is vulnerable to CVE-2021-31920. BASE This is currently the option used in the default installation of Istio. This applies the normalize_path option on Envoy proxies, which follows RFC 3986 with extra normalization to convert backslashes to forward slashes. /a/../b is normalized to /b . \da is normalized to /da . This setting is vulnerable to CVE-2021-31920. MERGE_SLASHES Slashes are merged after the BASE normalization. /a//b is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. DECODE_AND_MERGE_SLASHES The strictest setting when you allow all traffic by default. This setting is recommended, with the caveat that you must thoroughly test your authorization policies routes. Percent-encoded slash and backslash characters ( %2F , %2f , %5C and %5c ) are decoded to / or \ , before the MERGE_SLASHES normalization. /a%2fb is normalized to /a/b . Update to this setting to mitigate CVE-2021-31920. This setting is more secure, but also has the potential to break applications. Test your applications before deploying to production. The normalization algorithms are conducted in the following order: Percent-decode %2F , %2f , %5C and %5c . The RFC 3986 and other normalization implemented by the normalize_path option in Envoy. Merge slashes. Warning While these normalization options represent recommendations from HTTP standards and common industry practices, applications may interpret a URL in any way it chooses to. When using denial policies, ensure that you understand how your application behaves. 3.1.5.8.3. 
Path normalization configuration examples Ensuring Envoy normalizes request paths to match your backend services' expectations is critical to the security of your system. The following examples can be used as a reference for you to configure your system. The normalized URL paths, or the original URL paths if NONE is selected, will be: Used to check against the authorization policies. Forwarded to the backend application. Table 3.2. Configuration examples If your application... Choose... Relies on the proxy to do normalization BASE , MERGE_SLASHES or DECODE_AND_MERGE_SLASHES Normalizes request paths based on RFC 3986 and does not merge slashes. BASE Normalizes request paths based on RFC 3986 and merges slashes, but does not decode percent-encoded slashes. MERGE_SLASHES Normalizes request paths based on RFC 3986 , decodes percent-encoded slashes, and merges slashes. DECODE_AND_MERGE_SLASHES Processes request paths in a way that is incompatible with RFC 3986 . NONE 3.1.5.8.4. Configuring your SMCP for path normalization To configure path normalization for Red Hat OpenShift Service Mesh, specify the following in your ServiceMeshControlPlane . Use the configuration examples to help determine the settings for your system. SMCP v1 pathNormalization spec: global: pathNormalization: <option> 3.1.5.9. New features Red Hat OpenShift Service Mesh 1.1.13 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.10. New features Red Hat OpenShift Service Mesh 1.1.12 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.11. New features Red Hat OpenShift Service Mesh 1.1.11 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.12. New features Red Hat OpenShift Service Mesh 1.1.10 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.13. New features Red Hat OpenShift Service Mesh 1.1.9 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.14. New features Red Hat OpenShift Service Mesh 1.1.8 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.15. New features Red Hat OpenShift Service Mesh 1.1.7 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.16. New features Red Hat OpenShift Service Mesh 1.1.6 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.17. New features Red Hat OpenShift Service Mesh 1.1.5 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. This release also added support for configuring cipher suites. 3.1.5.18. New features Red Hat OpenShift Service Mesh 1.1.4 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. Note There are manual steps that must be completed to address CVE-2020-8663. 3.1.5.18.1. Manual updates required by CVE-2020-8663 The fix for CVE-2020-8663 : envoy: Resource exhaustion when accepting too many connections added a configurable limit on downstream connections. The configuration option for this limit must be configured to mitigate this vulnerability. 
Important These manual steps are required to mitigate this CVE whether you are using the 1.1 version or the 1.0 version of Red Hat OpenShift Service Mesh. This new configuration option is called overload.global_downstream_max_connections , and it is configurable as a proxy runtime setting. Perform the following steps to configure limits at the Ingress Gateway. Procedure Create a file named bootstrap-override.json with the following text to force the proxy to override the bootstrap template and load runtime configuration from disk: Create a secret from the bootstrap-override.json file, replacing <SMCPnamespace> with the namespace where you created the service mesh control plane (SMCP): USD oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json Update the SMCP configuration to activate the override. Updated SMCP configuration example #1 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap To set the new configuration option, create a secret that has the desired value for the overload.global_downstream_max_connections setting. The following example uses a value of 10000 : USD oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000 Update the SMCP again to mount the secret in the location where Envoy is looking for runtime configuration: Updated SMCP configuration example #2 apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to "v1.0" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings 3.1.5.18.2. Upgrading from Elasticsearch 5 to Elasticsearch 6 When updating from Elasticsearch 5 to Elasticsearch 6, you must delete your Jaeger instance, then recreate the Jaeger instance because of an issue with certificates. Re-creating the Jaeger instance triggers creating a new set of certificates. If you are using persistent storage the same volumes can be mounted for the new Jaeger instance as long as the Jaeger name and namespace for the new Jaeger instance are the same as the deleted Jaeger instance. Procedure if Jaeger is installed as part of Red Hat Service Mesh Determine the name of your Jaeger custom resource file: USD oc get jaeger -n istio-system You should see something like the following: NAME AGE jaeger 3d21h Copy the generated custom resource file into a temporary directory: USD oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml Delete the Jaeger instance: USD oc delete jaeger jaeger -n istio-system Recreate the Jaeger instance from your copy of the custom resource file: USD oc create -f /tmp/jaeger-cr.yaml -n istio-system Delete the copy of the generated custom resource file: USD rm /tmp/jaeger-cr.yaml Procedure if Jaeger not installed as part of Red Hat Service Mesh Before you begin, create a copy of your Jaeger custom resource file. 
Delete the Jaeger instance by deleting the custom resource file: USD oc delete -f <jaeger-cr-file> For example: USD oc delete -f jaeger-prod-elasticsearch.yaml Recreate your Jaeger instance from the backup copy of your custom resource file: USD oc create -f <jaeger-cr-file> Validate that your Pods have restarted: USD oc get pods -n jaeger-system -w 3.1.5.19. New features Red Hat OpenShift Service Mesh 1.1.3 This release of Red Hat OpenShift Service Mesh addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 3.1.5.20. New features Red Hat OpenShift Service Mesh 1.1.2 This release of Red Hat OpenShift Service Mesh addresses a security vulnerability. 3.1.5.21. New features Red Hat OpenShift Service Mesh 1.1.1 This release of Red Hat OpenShift Service Mesh adds support for a disconnected installation. 3.1.5.22. New features Red Hat OpenShift Service Mesh 1.1.0 This release of Red Hat OpenShift Service Mesh adds support for Istio 1.4.6 and Jaeger 1.17.1. 3.1.5.22.1. Manual updates from 1.0 to 1.1 If you are updating from Red Hat OpenShift Service Mesh 1.0 to 1.1, you must update the ServiceMeshControlPlane resource to update the control plane components to the new version. In the web console, click the Red Hat OpenShift Service Mesh Operator. Click the Project menu and choose the project where your ServiceMeshControlPlane is deployed from the list, for example istio-system . Click the name of your control plane, for example basic-install . Click YAML and add a version field to the spec: of your ServiceMeshControlPlane resource. For example, to update to Red Hat OpenShift Service Mesh 1.1.0, add version: v1.1 . The version field specifies the version of Service Mesh to install and defaults to the latest available version. Note Note that support for Red Hat OpenShift Service Mesh v1.0 ended in October, 2020. You must upgrade to either v1.1 or v2.0. 3.1.6. Deprecated features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 3.1.6.1. Deprecated features Red Hat OpenShift Service Mesh 1.1.5 The following custom resources were deprecated in release 1.1.5 and were removed in release 1.1.12 Policy - The Policy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. MeshPolicy - The MeshPolicy resource is deprecated and will be replaced by the PeerAuthentication resource in a future release. v1alpha1 RBAC API -The v1alpha1 RBAC policy is deprecated by the v1beta1 AuthorizationPolicy . RBAC (Role Based Access Control) defines ServiceRole and ServiceRoleBinding objects. ServiceRole ServiceRoleBinding RbacConfig - RbacConfig implements the Custom Resource Definition for controlling Istio RBAC behavior. ClusterRbacConfig (versions prior to Red Hat OpenShift Service Mesh 1.0) ServiceMeshRbacConfig (Red Hat OpenShift Service Mesh version 1.0 and later) In Kiali, the login and LDAP strategies are deprecated. A future version will introduce authentication using OpenID providers. The following components are also deprecated in this release and will be replaced by the Istiod component in a future release. Mixer - access control and usage policies Pilot - service discovery and proxy configuration Citadel - certificate generation Galley - configuration validation and distribution 3.1.7. 
Known issues These limitations exist in Red Hat OpenShift Service Mesh: Red Hat OpenShift Service Mesh does not support IPv6 , as it is not supported by the upstream Istio project, nor fully supported by OpenShift Container Platform. Graph layout - The layout for the Kiali graph can render differently, depending on your application architecture and the data to display (number of graph nodes and their interactions). Because it is difficult if not impossible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. To choose a different layout, you can choose a different Layout Schema from the Graph Settings menu. The first time you access related services, such as Jaeger and Grafana, from the Kiali console, you must accept the certificate and re-authenticate using your OpenShift Container Platform login credentials. This happens due to an issue with how the framework displays embedded pages in the console. 3.1.7.1. Service Mesh known issues These are the known issues in Red Hat OpenShift Service Mesh: Jaeger/Kiali Operator upgrade blocked with operator pending When upgrading the Jaeger or Kiali Operators with Service Mesh 1.0.x installed, the operator status shows as Pending. Workaround: See the linked Knowledge Base article for more information. Istio-14743 Due to limitations in the version of Istio that this release of Red Hat OpenShift Service Mesh is based on, there are several applications that are currently incompatible with Service Mesh. See the linked community issue for details. MAISTRA-858 The following Envoy log messages describing deprecated options and configurations associated with Istio 1.1.x are expected: [2019-06-03 07:03:28.943][19][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. [2019-08-12 22:12:59.001][13][warning][misc] [external/envoy/source/common/protobuf/utility.cc:174] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. MAISTRA-806 Evicted Istio Operator Pod causes mesh and CNI not to deploy. Workaround: If the istio-operator pod is evicted while deploying the control plane, delete the evicted istio-operator pod. MAISTRA-681 When the control plane has many namespaces, it can lead to performance issues. MAISTRA-465 The Maistra Operator fails to create a service for operator metrics. MAISTRA-453 If you create a new project and deploy pods immediately, sidecar injection does not occur. The operator fails to add the maistra.io/member-of label before the pods are created; therefore, the pods must be deleted and recreated for sidecar injection to occur. MAISTRA-158 Applying multiple gateways referencing the same hostname will cause all gateways to stop functioning. 3.1.7.2. Kiali known issues Note New issues for Kiali should be created in the OpenShift Service Mesh project with the Component set to Kiali . These are the known issues in Kiali: KIALI-2206 When you are accessing the Kiali console for the first time, and there is no cached browser data for Kiali, the "View in Grafana" link on the Metrics tab of the Kiali Service Details page redirects to the wrong location. The only way you would encounter this issue is if you are accessing Kiali for the first time. KIALI-507 Kiali does not support Internet Explorer 11. This is because the underlying frameworks do not support Internet Explorer.
To access the Kiali console, use one of the two most recent versions of the Chrome, Edge, Firefox, or Safari browser. 3.1.8. Fixed issues The following issues have been resolved in the current release: 3.1.8.1. Service Mesh fixed issues MAISTRA-2371 Handle tombstones in listerInformer. The updated cache codebase was not handling tombstones when translating the events from the namespace caches to the aggregated cache, leading to a panic in the goroutine. OSSM-542 Galley is not using the new certificate after rotation. OSSM-99 Workloads generated from direct pod without labels may crash Kiali. OSSM-93 IstioConfigList can't filter by two or more names. OSSM-92 Cancelling unsaved changes on the VS/DR YAML edit page does not cancel the changes. OSSM-90 Traces not available on the service details page. MAISTRA-1649 Headless services conflict when in different namespaces. When deploying headless services within different namespaces, the endpoint configuration is merged and results in invalid Envoy configurations being pushed to the sidecars. MAISTRA-1541 Panic in kubernetesenv when the controller is not set on owner reference. If a pod has an ownerReference which does not specify the controller, this will cause a panic within the kubernetesenv cache.go code. MAISTRA-1352 Cert-manager Custom Resource Definitions (CRD) from the control plane installation have been removed for this release and future releases. If you have already installed Red Hat OpenShift Service Mesh, the CRDs must be removed manually if cert-manager is not being used. MAISTRA-1001 Closing HTTP/2 connections could lead to segmentation faults in istio-proxy . MAISTRA-932 Added the requires metadata to add a dependency relationship between the Jaeger Operator and the OpenShift Elasticsearch Operator. This ensures that when the Jaeger Operator is installed, it automatically deploys the OpenShift Elasticsearch Operator if it is not available. MAISTRA-862 Galley dropped watches and stopped providing configuration to other components after many namespace deletions and re-creations. MAISTRA-833 Pilot stopped delivering configuration after many namespace deletions and re-creations. MAISTRA-684 The default Jaeger version in the istio-operator is 1.12.0, which does not match Jaeger version 1.13.1 that shipped in Red Hat OpenShift Service Mesh 0.12.TechPreview. MAISTRA-622 In Maistra 0.12.0/TP12, permissive mode does not work. The user has the option to use Plain text mode or Mutual TLS mode, but not permissive. MAISTRA-572 Jaeger cannot be used with Kiali. In this release, Jaeger is configured to use the OAuth proxy, but it is also only configured to work through a browser and does not allow service access. Kiali cannot properly communicate with the Jaeger endpoint and it considers Jaeger to be disabled. See also TRACING-591 . MAISTRA-357 In OpenShift 4 Beta on AWS, it is not possible, by default, to access a TCP or HTTPS service through the ingress gateway on a port other than port 80. The AWS load balancer has a health check that verifies if port 80 on the service endpoint is active. Without a service running on port 80, the load balancer health check fails. MAISTRA-348 OpenShift 4 Beta on AWS does not support ingress gateway traffic on ports other than 80 or 443. If you configure your ingress gateway to handle TCP traffic with a port number other than 80 or 443, you have to use the service hostname provided by the AWS load balancer rather than the OpenShift router as a workaround.
MAISTRA-193 Unexpected console info messages are visible when health checking is enabled for citadel. Bug 1821432 Toggle controls in OpenShift Container Platform Control Resource details page do not update the CR correctly. UI Toggle controls in the Service Mesh Control Plane (SMCP) Overview page in the OpenShift Container Platform web console sometimes update the wrong field in the resource. To update a ServiceMeshControlPlane resource, edit the YAML content directly or update the resource from the command line instead of clicking the toggle controls. 3.1.8.2. Kiali fixed issues KIALI-3239 If a Kiali Operator pod has failed with a status of "Evicted" it blocks the Kiali operator from deploying. The workaround is to delete the Evicted pod and redeploy the Kiali operator. KIALI-3118 After changes to the ServiceMeshMemberRoll, for example adding or removing projects, the Kiali pod restarts and then displays errors on the Graph page while the Kiali pod is restarting. KIALI-3096 Runtime metrics fail in Service Mesh. There is an OAuth filter between the Service Mesh and Prometheus, requiring a bearer token to be passed to Prometheus before access is granted. Kiali has been updated to use this token when communicating to the Prometheus server, but the application metrics are currently failing with 403 errors. KIALI-3070 This bug only affects custom dashboards, not the default dashboards. When you select labels in metrics settings and refresh the page, your selections are retained in the menu but your selections are not displayed on the charts. KIALI-2686 When the control plane has many namespaces, it can lead to performance issues. 3.2. Understanding Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Red Hat OpenShift Service Mesh provides a platform for behavioral insight and operational control over your networked microservices in a service mesh. With Red Hat OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment. 3.2.1. What is Red Hat OpenShift Service Mesh? A service mesh is the network of microservices that make up applications in a distributed microservice architecture and the interactions between those microservices. When a Service Mesh grows in size and complexity, it can become harder to understand and manage. Based on the open source Istio project, Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the service code. You add Red Hat OpenShift Service Mesh support to services by deploying a special sidecar proxy to relevant services in the mesh that intercepts all network communication between microservices. You configure and manage the Service Mesh using the Service Mesh control plane features. Red Hat OpenShift Service Mesh gives you an easy way to create a network of deployed services that provide: Discovery Load balancing Service-to-service authentication Failure recovery Metrics Monitoring Red Hat OpenShift Service Mesh also provides more complex operational functions including: A/B testing Canary releases Access control End-to-end authentication 3.2.2. 
Red Hat OpenShift Service Mesh Architecture Red Hat OpenShift Service Mesh is logically split into a data plane and a control plane: The data plane is a set of intelligent proxies deployed as sidecars. These proxies intercept and control all inbound and outbound network communication between microservices in the service mesh. Sidecar proxies also communicate with Mixer, the general-purpose policy and telemetry hub. Envoy proxy intercepts all inbound and outbound traffic for all services in the service mesh. Envoy is deployed as a sidecar to the relevant service in the same pod. The control plane manages and configures proxies to route traffic, and configures Mixers to enforce policies and collect telemetry. Mixer enforces access control and usage policies (such as authorization, rate limits, quotas, authentication, and request tracing) and collects telemetry data from the Envoy proxy and other services. Pilot configures the proxies at runtime. Pilot provides service discovery for the Envoy sidecars, traffic management capabilities for intelligent routing (for example, A/B tests or canary deployments), and resiliency (timeouts, retries, and circuit breakers). Citadel issues and rotates certificates. Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management. You can use Citadel to upgrade unencrypted traffic in the service mesh. Operators can enforce policies based on service identity rather than on network controls using Citadel. Galley ingests the service mesh configuration, then validates, processes, and distributes the configuration. Galley protects the other service mesh components from obtaining user configuration details from OpenShift Container Platform. Red Hat OpenShift Service Mesh also uses the istio-operator to manage the installation of the control plane. An Operator is a piece of software that enables you to implement and automate common activities in your OpenShift Container Platform cluster. It acts as a controller, allowing you to set or change the desired state of objects in your cluster. 3.2.3. Understanding Kiali Kiali provides visibility into your service mesh by showing you the microservices in your service mesh, and how they are connected. 3.2.3.1. Kiali overview Kiali provides observability into the Service Mesh running on OpenShift Container Platform. Kiali helps you define, validate, and observe your Istio service mesh. It helps you to understand the structure of your service mesh by inferring the topology, and also provides information about the health of your service mesh. Kiali provides an interactive graph view of your namespace in real time that provides visibility into features like circuit breakers, request rates, latency, and even graphs of traffic flows. Kiali offers insights about components at different levels, from Applications to Services and Workloads, and can display the interactions with contextual information and charts on the selected graph node or edge. Kiali also provides the ability to validate your Istio configurations, such as gateways, destination rules, virtual services, mesh policies, and more. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger into the Kiali console. Kiali is installed by default as part of the Red Hat OpenShift Service Mesh. 3.2.3.2. Kiali architecture Kiali is based on the open source Kiali project . 
Kiali is composed of two components: the Kiali application and the Kiali console. Kiali application (back end) - This component runs in the container application platform and communicates with the service mesh components, retrieves and processes data, and exposes this data to the console. The Kiali application does not need storage. When deploying the application to a cluster, configurations are set in ConfigMaps and secrets. Kiali console (front end) - The Kiali console is a web application. The Kiali application serves the Kiali console, which then queries the back end for data to present it to the user. In addition, Kiali depends on external services and components provided by the container application platform and Istio. Red Hat Service Mesh (Istio) - Istio is a Kiali requirement. Istio is the component that provides and controls the service mesh. Although Kiali and Istio can be installed separately, Kiali depends on Istio and will not work if it is not present. Kiali needs to retrieve Istio data and configurations, which are exposed through Prometheus and the cluster API. Prometheus - A dedicated Prometheus instance is included as part of the Red Hat OpenShift Service Mesh installation. When Istio telemetry is enabled, metrics data are stored in Prometheus. Kiali uses this Prometheus data to determine the mesh topology, display metrics, calculate health, show possible problems, and so on. Kiali communicates directly with Prometheus and assumes the data schema used by Istio Telemetry. Prometheus is an Istio dependency and a hard dependency for Kiali, and many of Kiali's features will not work without Prometheus. Cluster API - Kiali uses the API of the OpenShift Container Platform (cluster API) to fetch and resolve service mesh configurations. Kiali queries the cluster API to retrieve, for example, definitions for namespaces, services, deployments, pods, and other entities. Kiali also makes queries to resolve relationships between the different cluster entities. The cluster API is also queried to retrieve Istio configurations like virtual services, destination rules, route rules, gateways, quotas, and so on. Jaeger - Jaeger is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When you install the distributed tracing platform (Jaeger) as part of the default Red Hat OpenShift Service Mesh installation, the Kiali console includes a tab to display distributed tracing data. Note that tracing data will not be available if you disable Istio's distributed tracing feature. Also note that user must have access to the namespace where the Service Mesh control plane is installed to view tracing data. Grafana - Grafana is optional, but is installed by default as part of the Red Hat OpenShift Service Mesh installation. When available, the metrics pages of Kiali display links to direct the user to the same metric in Grafana. Note that user must have access to the namespace where the Service Mesh control plane is installed to view links to the Grafana dashboard and view Grafana data. 3.2.3.3. Kiali features The Kiali console is integrated with Red Hat Service Mesh and provides the following capabilities: Health - Quickly identify issues with applications, services, or workloads. Topology - Visualize how your applications, services, or workloads communicate via the Kiali graph. Metrics - Predefined metrics dashboards let you chart service mesh and application performance for Go, Node.js. Quarkus, Spring Boot, Thorntail and Vert.x. 
You can also create your own custom dashboards. Tracing - Integration with Jaeger lets you follow the path of a request through various microservices that make up an application. Validations - Perform advanced validations on the most common Istio objects (Destination Rules, Service Entries, Virtual Services, and so on). Configuration - Optional ability to create, update and delete Istio routing configuration using wizards or directly in the YAML editor in the Kiali Console. 3.2.4. Understanding Jaeger Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. The path of this request is a distributed transaction. Jaeger lets you perform distributed tracing, which follows the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together-usually executed in different processes or hosts-to understand a whole chain of events in a distributed transaction. Distributed tracing lets developers visualize call flows in large service oriented architectures. It can be invaluable in understanding serialization, parallelism, and sources of latency. Jaeger records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Jaeger that has an operation name, the start time of the operation, and the duration. Spans may be nested and ordered to model causal relationships. 3.2.4.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 3.2.4.2. Distributed tracing architecture The distributed tracing platform (Jaeger) is based on the open source Jaeger project . The distributed tracing platform (Jaeger) is made up of several components that work together to collect, store, and display tracing data. Jaeger Client (Tracer, Reporter, instrumented application, client libraries)- Jaeger clients are language specific implementations of the OpenTracing API. They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Jaeger Agent (Server Queue, Processor Workers) - The Jaeger agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments like Kubernetes. Jaeger Collector (Queue, Workers) - Similar to the Agent, the Collector is able to receive spans and place them in an internal queue for processing. 
This allows the collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Jaeger has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Jaeger can use Apache Kafka as a buffer between the collector and the actual backing storage (Elasticsearch). Ingester is a service that reads data from Kafka and writes to another storage backend (Elasticsearch). Jaeger Console - Jaeger provides a user interface that lets you visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 3.2.4.3. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 3.2.5. steps Prepare to install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.3. Service Mesh and Istio differences Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . An installation of Red Hat OpenShift Service Mesh differs from upstream Istio community installations in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. The current release of Red Hat OpenShift Service Mesh differs from the current upstream Istio community release in the following ways: 3.3.1. Multitenant installations Whereas upstream Istio takes a single tenant approach, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. Red Hat OpenShift Service Mesh uses a multitenant operator to manage the control plane lifecycle. Red Hat OpenShift Service Mesh installs a multitenant control plane by default. You specify the projects that can access the Service Mesh, and isolate the Service Mesh from other control plane instances. 3.3.1.1. Multitenancy versus cluster-wide installations The main difference between a multitenant installation and a cluster-wide installation is the scope of privileges used by istod. The components no longer use cluster-scoped Role Based Access Control (RBAC) resource ClusterRoleBinding . 
Every project in the ServiceMeshMemberRoll members list will have a RoleBinding for each service account associated with the control plane deployment and each control plane deployment will only watch those member projects. Each member project has a maistra.io/member-of label added to it, where the member-of value is the project containing the control plane installation. Red Hat OpenShift Service Mesh configures each member project to ensure network access between itself, the control plane, and other member projects. The exact configuration differs depending on how OpenShift Container Platform software-defined networking (SDN) is configured. See About OpenShift SDN for additional details. If the OpenShift Container Platform cluster is configured to use the SDN plugin: NetworkPolicy : Red Hat OpenShift Service Mesh creates a NetworkPolicy resource in each member project allowing ingress to all pods from the other members and the control plane. If you remove a member from Service Mesh, this NetworkPolicy resource is deleted from the project. Note This also restricts ingress to only member projects. If you require ingress from non-member projects, you need to create a NetworkPolicy to allow that traffic through. Multitenant : Red Hat OpenShift Service Mesh joins the NetNamespace for each member project to the NetNamespace of the control plane project (the equivalent of running oc adm pod-network join-projects --to control-plane-project member-project ). If you remove a member from the Service Mesh, its NetNamespace is isolated from the control plane (the equivalent of running oc adm pod-network isolate-projects member-project ). Subnet : No additional configuration is performed. 3.3.1.2. Cluster scoped resources Upstream Istio has two cluster scoped resources that it relies on. The MeshPolicy and the ClusterRbacConfig . These are not compatible with a multitenant cluster and have been replaced as described below. ServiceMeshPolicy replaces MeshPolicy for configuration of control-plane-wide authentication policies. This must be created in the same project as the control plane. ServicemeshRbacConfig replaces ClusterRbacConfig for configuration of control-plane-wide role based access control. This must be created in the same project as the control plane. 3.3.2. Differences between Istio and Red Hat OpenShift Service Mesh An installation of Red Hat OpenShift Service Mesh differs from an installation of Istio in multiple ways. The modifications to Red Hat OpenShift Service Mesh are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. 3.3.2.1. Command line tool The command line tool for Red Hat OpenShift Service Mesh is oc. Red Hat OpenShift Service Mesh does not support istioctl. 3.3.2.2. Automatic injection The upstream Istio community installation automatically injects the sidecar into pods within the projects you have labeled. Red Hat OpenShift Service Mesh does not automatically inject the sidecar to any pods, but requires you to opt in to injection using an annotation without labeling projects. This method requires fewer privileges and does not conflict with other OpenShift capabilities such as builder pods. To enable automatic injection you specify the sidecar.istio.io/inject annotation as described in the Automatic sidecar injection section. 3.3.2.3. Istio Role Based Access Control features Istio Role Based Access Control (RBAC) provides a mechanism you can use to control access to a service. 
You can identify subjects by user name or by specifying a set of properties and apply access controls accordingly. The upstream Istio community installation includes options to perform exact header matches, match wildcards in headers, or check for a header containing a specific prefix or suffix. Red Hat OpenShift Service Mesh extends the ability to match request headers by using a regular expression. Specify a property key of request.regex.headers with a regular expression. Upstream Istio community matching request headers example apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.headers[<header>]: "value" Red Hat OpenShift Service Mesh matching request headers by using regular expressions apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account" properties: request.regex.headers[<header>]: "<regular expression>" 3.3.2.4. OpenSSL Red Hat OpenShift Service Mesh replaces BoringSSL with OpenSSL. OpenSSL is a software library that contains an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. The Red Hat OpenShift Service Mesh Proxy binary dynamically links the OpenSSL libraries (libssl and libcrypto) from the underlying Red Hat Enterprise Linux operating system. 3.3.2.5. Component modifications A maistra-version label has been added to all resources. All Ingress resources have been converted to OpenShift Route resources. Grafana, Tracing (Jaeger), and Kiali are enabled by default and exposed through OpenShift routes. Godebug has been removed from all templates. The istio-multi ServiceAccount and ClusterRoleBinding have been removed, as well as the istio-reader ClusterRole. 3.3.2.6. Envoy, Secret Discovery Service, and certificates Red Hat OpenShift Service Mesh does not support QUIC-based services. Deployment of TLS certificates using the Secret Discovery Service (SDS) functionality of Istio is not currently supported in Red Hat OpenShift Service Mesh. The Istio implementation depends on a nodeagent container that uses hostPath mounts. 3.3.2.7. Istio Container Network Interface (CNI) plugin Red Hat OpenShift Service Mesh includes a CNI plugin, which provides you with an alternate way to configure application pod networking. The CNI plugin replaces the init-container network configuration, eliminating the need to grant service accounts and projects access to Security Context Constraints (SCCs) with elevated privileges. 3.3.2.8. Routes for Istio Gateways OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated, or deleted inside the service mesh, an OpenShift route is created, updated, or deleted. A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. For more information, see Automatic route creation. 3.3.2.8.1. Catch-all domains Catch-all domains ("*") are not supported. If one is found in the Gateway definition, Red Hat OpenShift Service Mesh will create the route, but will rely on OpenShift to create a default hostname.
This means that the newly created route will not be a catch-all ("*") route; instead, it will have a hostname in the form <route-name>[-<project>].<suffix> . See the OpenShift documentation for more information about how default hostnames work and how a cluster administrator can customize them. 3.3.2.8.2. Subdomains Subdomains (for example, "*.domain.com") are supported. However, this capability is not enabled by default in OpenShift Container Platform. This means that Red Hat OpenShift Service Mesh will create the route with the subdomain, but it will only be in effect if OpenShift Container Platform is configured to enable it. 3.3.2.8.3. Transport layer security Transport Layer Security (TLS) is supported. This means that, if the Gateway contains a tls section, the OpenShift Route will be configured to support TLS. Additional resources Automatic route creation 3.3.3. Kiali and service mesh Installing Kiali via the Service Mesh on OpenShift Container Platform differs from community Kiali installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Kiali has been enabled by default. Ingress has been enabled by default. Updates have been made to the Kiali ConfigMap. Updates have been made to the ClusterRole settings for Kiali. Do not edit the ConfigMap, because your changes might be overwritten by the Service Mesh or Kiali Operators. Files that the Kiali Operator manages have a kiali.io/ label or annotation. Updating the Operator files should be restricted to those users with cluster-admin privileges. If you use Red Hat OpenShift Dedicated, updating the Operator files should be restricted to those users with dedicated-admin privileges. 3.3.4. Distributed tracing and service mesh Installing the distributed tracing platform (Jaeger) with the Service Mesh on OpenShift Container Platform differs from community Jaeger installations in multiple ways. These modifications are sometimes necessary to resolve issues, provide additional features, or to handle differences when deploying on OpenShift Container Platform. Distributed tracing has been enabled by default for Service Mesh. Ingress has been enabled by default for Service Mesh. The name of the Zipkin port has changed to jaeger-collector-zipkin (from http ). Jaeger uses Elasticsearch for storage by default when you select either the production or streaming deployment option. The community version of Istio provides a generic "tracing" route. Red Hat OpenShift Service Mesh uses a "jaeger" route that is installed by the Red Hat OpenShift distributed tracing platform (Jaeger) Operator and is already protected by OAuth. Red Hat OpenShift Service Mesh uses a sidecar for the Envoy proxy, and Jaeger also uses a sidecar for the Jaeger agent. These two sidecars are configured separately and should not be confused with each other. The proxy sidecar creates spans related to the pod's ingress and egress traffic. The agent sidecar receives the spans emitted by the application and sends them to the Jaeger Collector. 3.4. Preparing to install Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh .
For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Before you can install Red Hat OpenShift Service Mesh, review the installation activities and ensure that you meet the following prerequisites: 3.4.1. Prerequisites Possess an active OpenShift Container Platform subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. Review the OpenShift Container Platform 4.14 overview . Install OpenShift Container Platform 4.14. Install OpenShift Container Platform 4.14 on AWS Install OpenShift Container Platform 4.14 on user-provisioned AWS Install OpenShift Container Platform 4.14 on bare metal Install OpenShift Container Platform 4.14 on vSphere Note If you are installing Red Hat OpenShift Service Mesh on a restricted network , follow the instructions for your chosen OpenShift Container Platform infrastructure. Install the version of the OpenShift Container Platform command line utility (the oc client tool) that matches your OpenShift Container Platform version and add it to your path. If you are using OpenShift Container Platform 4.14, see About the OpenShift CLI . 3.4.2. Red Hat OpenShift Service Mesh supported configurations The following are the only supported configurations for Red Hat OpenShift Service Mesh: OpenShift Container Platform version 4.6 or later. Note OpenShift Online and Red Hat OpenShift Dedicated are not supported for Red Hat OpenShift Service Mesh. The deployment must be contained within a single OpenShift Container Platform cluster that is not federated. This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64. This release only supports configurations where all Service Mesh components are contained in the OpenShift Container Platform cluster in which it operates. It does not support management of microservices that reside outside of the cluster, or in a multi-cluster scenario. This release only supports configurations that do not integrate external services such as virtual machines. For additional information about Red Hat OpenShift Service Mesh lifecycle and supported configurations, refer to the Support Policy . 3.4.2.1. Supported configurations for Kiali on Red Hat OpenShift Service Mesh The Kiali observability console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 3.4.2.2. Supported Mixer adapters This release only supports the following Mixer adapter: 3scale Istio Adapter 3.4.3. Service Mesh Operators overview Red Hat OpenShift Service Mesh requires the use of the Red Hat OpenShift Service Mesh Operator, which allows you to connect, secure, control, and observe the microservices that comprise your applications. You can also install other Operators to enhance your service mesh experience. Warning Do not install Community versions of the Operators. Community Operators are not supported. The following Operator is required: Red Hat OpenShift Service Mesh Operator Allows you to connect, secure, control, and observe the microservices that comprise your applications. It also defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. The following Operators are optional: Kiali Operator provided by Red Hat Provides observability for your service mesh.
You can view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Grafana Tempo project. The following optional Operators are deprecated: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. OpenShift Elasticsearch Operator Provides database storage for tracing and logging with the distributed tracing platform (Jaeger). It is based on the open source Elasticsearch project. Warning See Configuring the Elasticsearch log store for details on configuring the default Jaeger parameters for Elasticsearch in a production environment. 3.4.4. Next steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 3.5. Installing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . Installing the Service Mesh involves installing the OpenShift Elasticsearch, Jaeger, Kiali, and Service Mesh Operators, creating and managing a ServiceMeshControlPlane resource to deploy the control plane, and creating a ServiceMeshMemberRoll resource to specify the namespaces associated with the Service Mesh. Note Mixer's policy enforcement is disabled by default. You must enable it to run policy tasks. See Update Mixer policy enforcement for instructions on enabling Mixer policy enforcement. Note Multi-tenant control plane installations are the default configuration. Note The Service Mesh documentation uses istio-system as the example project, but you can deploy the service mesh to any project. 3.5.1. Prerequisites Follow the Preparing to install Red Hat OpenShift Service Mesh process. An account with the cluster-admin role. The Service Mesh installation process uses the OperatorHub to install the ServiceMeshControlPlane custom resource definition within the openshift-operators project. The Red Hat OpenShift Service Mesh Operator defines and monitors the ServiceMeshControlPlane resource related to the deployment, update, and deletion of the control plane. Starting with Red Hat OpenShift Service Mesh 1.1.18.2, you must install the OpenShift Elasticsearch Operator, the Jaeger Operator, and the Kiali Operator before the Red Hat OpenShift Service Mesh Operator can install the control plane. 3.5.2.
Installing the OpenShift Elasticsearch Operator The default Red Hat OpenShift distributed tracing platform (Jaeger) deployment uses in-memory storage because it is designed to be installed quickly for those evaluating Red Hat OpenShift distributed tracing platform, giving demonstrations, or using Red Hat OpenShift distributed tracing platform (Jaeger) in a test environment. If you plan to use Red Hat OpenShift distributed tracing platform (Jaeger) in production, you must install and configure a persistent storage option, in this case, Elasticsearch. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Warning Do not install Community versions of the Operators. Community Operators are not supported. Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Type Elasticsearch into the filter box to locate the OpenShift Elasticsearch Operator. Click the OpenShift Elasticsearch Operator provided by Red Hat to display information about the Operator. Click Install . On the Install Operator page, select the stable Update Channel. This automatically updates your Operator as new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators-redhat project and makes the Operator available to all projects in the cluster. Note The Elasticsearch installation requires the openshift-operators-redhat namespace for the OpenShift Elasticsearch Operator. The other Red Hat OpenShift distributed tracing platform Operators are installed in the openshift-operators namespace. Accept the default Automatic approval strategy. By accepting the default, when a new version of this Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select Manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . On the Installed Operators page, select the openshift-operators-redhat project. Wait for the InstallSucceeded status of the OpenShift Elasticsearch Operator before continuing. 3.5.3. Installing the Red Hat OpenShift distributed tracing platform Operator You can install the Red Hat OpenShift distributed tracing platform Operator through the OperatorHub . By default, the Operator is installed in the openshift-operators project. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a user with the cluster-admin role. 
If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. If you require persistent storage, you must install the OpenShift Elasticsearch Operator before installing the Red Hat OpenShift distributed tracing platform Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. If you use Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Navigate to Operators OperatorHub . Search for the Red Hat OpenShift distributed tracing platform Operator by entering distributed tracing platform in the search field. Select the Red Hat OpenShift distributed tracing platform Operator, which is provided by Red Hat , to display information about the Operator. Click Install . For the Update channel on the Install Operator page, select stable to automatically update the Operator when new versions are released. Accept the default All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Accept the default Automatic approval strategy. Note If you accept this default, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of this Operator when a new version of the Operator becomes available. If you select Manual updates, the OLM creates an update request when a new version of the Operator becomes available. To update the Operator to the new version, you must then manually approve the update request as a cluster administrator. The Manual approval strategy requires a cluster administrator to manually approve Operator installation and subscription. Click Install . Navigate to Operators Installed Operators . On the Installed Operators page, select the openshift-operators project. Wait for the Succeeded status of the Red Hat OpenShift distributed tracing platform Operator before continuing. 3.5.4. Installing the Kiali Operator You must install the Kiali Operator for the Red Hat OpenShift Service Mesh Operator to install the Service Mesh control plane. Warning Do not install Community versions of the Operators. Community Operators are not supported. Prerequisites Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Kiali into the filter box to find the Kiali Operator. Click the Kiali Operator provided by Red Hat to display information about the Operator. Click Install . On the Operator Installation page, select the stable Update Channel. Select All namespaces on the cluster (default) . This installs the Operator in the default openshift-operators project and makes the Operator available to all projects in the cluster. Select the Automatic Approval Strategy. Note The Manual approval strategy requires a user with appropriate credentials to approve the Operator install and subscription process. Click Install . The Installed Operators page displays the Kiali Operator's installation progress. 3.5.5. Installing the Operators To install Red Hat OpenShift Service Mesh, you must install the Red Hat OpenShift Service Mesh Operator. Repeat the procedure for each additional Operator you want to install. 
Additional Operators include: Kiali Operator provided by Red Hat Tempo Operator Deprecated additional Operators include: Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) and OpenShift Elasticsearch Operator are deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for these features during the current release lifecycle, but these features will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Operator Note If you have already installed the OpenShift Elasticsearch Operator as part of OpenShift Logging, you do not need to install the OpenShift Elasticsearch Operator again. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator creates the Elasticsearch instance using the installed OpenShift Elasticsearch Operator. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. In the OpenShift Container Platform web console, click Operators OperatorHub . Type the name of the Operator into the filter box and select the Red Hat version of the Operator. Community versions of the Operators are not supported. Click Install . On the Install Operator page for each Operator, accept the default settings. Click Install . Wait until the Operator installs before repeating the steps for the next Operator you want to install. The Red Hat OpenShift Service Mesh Operator installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Kiali Operator provided by Red Hat installs in the openshift-operators namespace and is available for all namespaces in the cluster. The Tempo Operator installs in the openshift-tempo-operator namespace and is available for all namespaces in the cluster. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator installs in the openshift-distributed-tracing namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, Red Hat OpenShift distributed tracing platform (Jaeger) is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Red Hat OpenShift distributed tracing platform (Jaeger), you can use Red Hat OpenShift distributed tracing platform (Tempo) instead. The OpenShift Elasticsearch Operator installs in the openshift-operators-redhat namespace and is available for all namespaces in the cluster. Important Starting with Red Hat OpenShift Service Mesh 2.5, OpenShift Elasticsearch Operator is deprecated and will be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. Verification After you have installed all four Operators, click Operators Installed Operators to verify that your Operators are installed. 3.5.6. Deploying the Red Hat OpenShift Service Mesh control plane The ServiceMeshControlPlane resource defines the configuration to be used during installation.
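For orientation before working through the procedures that follow, a ServiceMeshControlPlane in this release uses the maistra.io/v1 API with its settings nested under spec.istio. The following is a minimal sketch rather than a definitive configuration; the resource name, the enabled add-ons, and the all-in-one Jaeger template are illustrative assumptions, and for production you must change the default Jaeger template as noted in the procedures:

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install        # assumed name; choose your own
  namespace: istio-system    # the control plane project
spec:
  istio:
    global:
      mtls:
        enabled: false       # set to true to require mTLS across the mesh
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false
      istio-egressgateway:
        autoscaleEnabled: false
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        template: all-in-one # assumption: in-memory storage, suitable for evaluation only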
You can deploy the default configuration provided by Red Hat or customize the ServiceMeshControlPlane file to fit your business needs. You can deploy the Service Mesh control plane by using the OpenShift Container Platform web console or from the command line using the oc client tool. 3.5.6.1. Deploying the control plane from the web console Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane by using the web console. In this example, istio-system is the name of the control plane project. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Create a project named istio-system . Navigate to Home Projects . Click Create Project . Enter istio-system in the Name field. Click Create . Navigate to Operators Installed Operators . If necessary, select istio-system from the Project menu. You may have to wait a few moments for the Operators to be copied to the new project. Click the Red Hat OpenShift Service Mesh Operator. Under Provided APIs , the Operator provides links to create two resource types: A ServiceMeshControlPlane resource A ServiceMeshMemberRoll resource Under Istio Service Mesh Control Plane , click Create ServiceMeshControlPlane . On the Create Service Mesh Control Plane page, modify the YAML for the default ServiceMeshControlPlane template as needed. Note For additional information about customizing the control plane, see customizing the Red Hat OpenShift Service Mesh installation. For production, you must change the default Jaeger template. Click Create to create the control plane. The Operator creates pods, services, and Service Mesh control plane components based on your configuration parameters. Click the Istio Service Mesh Control Plane tab. Click the name of the new control plane. Click the Resources tab to see the Red Hat OpenShift Service Mesh control plane resources the Operator created and configured. 3.5.6.2. Deploying the control plane from the CLI Follow this procedure to deploy the Red Hat OpenShift Service Mesh control plane from the command line. Prerequisites The Red Hat OpenShift Service Mesh Operator must be installed. Review the instructions for how to customize the Red Hat OpenShift Service Mesh installation. An account with the cluster-admin role. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 Create a project named istio-system . USD oc new-project istio-system Create a ServiceMeshControlPlane file named istio-installation.yaml using the example found in "Customize the Red Hat OpenShift Service Mesh installation". You can customize the values as needed to match your use case. For production deployments, you must change the default Jaeger template. Run the following command to deploy the control plane: USD oc create -n istio-system -f istio-installation.yaml Execute the following command to see the status of the control plane installation. USD oc get smcp -n istio-system The installation has finished successfully when the STATUS column is ComponentsReady .
Run the following command to watch the progress of the Pods during the installation process: USD oc get pods -n istio-system -w You should see output similar to the following: Example output NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h For a multitenant installation, Red Hat OpenShift Service Mesh supports multiple independent control planes within the cluster. You can create reusable configurations with ServiceMeshControlPlane templates. For more information, see Creating control plane templates . 3.5.7. Creating the Red Hat OpenShift Service Mesh member roll The ServiceMeshMemberRoll lists the projects that belong to the Service Mesh control plane. Only projects listed in the ServiceMeshMemberRoll are affected by the control plane. A project does not belong to a service mesh until you add it to the member roll for a particular control plane deployment. You must create a ServiceMeshMemberRoll resource named default in the same project as the ServiceMeshControlPlane , for example istio-system . 3.5.7.1. Creating the member roll from the web console You can add one or more projects to the Service Mesh member roll from the web console. In this example, istio-system is the name of the Service Mesh control plane project. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of existing projects to add to the service mesh. Procedure Log in to the OpenShift Container Platform web console. If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. Navigate to Home Projects . Enter a name in the Name field. Click Create . Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Create . 3.5.7.2. Creating the member roll from the CLI You can add a project to the ServiceMeshMemberRoll from the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. List of projects to add to the service mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. USD oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443 If you do not already have services for your mesh, or you are starting from scratch, create a project for your applications. It must be different from the project where you installed the Service Mesh control plane. USD oc new-project <your-project> To add your projects as members, modify the following example YAML.
You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. In this example, istio-system is the name of the Service Mesh control plane project. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name Run the following command to upload and create the ServiceMeshMemberRoll resource in the istio-system namespace. USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system default The installation has finished successfully when the STATUS column is Configured . 3.5.8. Adding or removing projects from the service mesh You can add or remove projects from an existing Service Mesh ServiceMeshMemberRoll resource using the web console. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. The ServiceMeshMemberRoll resource is deleted when its corresponding ServiceMeshControlPlane resource is deleted. 3.5.8.1. Adding or removing projects from the member roll using the web console Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Click the Project menu and choose the project where your ServiceMeshControlPlane resource is deployed from the list, for example istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. Click the default link. Click the YAML tab. Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Click Save . Click Reload . 3.5.8.2. Adding or removing projects from the member roll using the CLI You can modify an existing Service Mesh member roll using the command line. Prerequisites An installed, verified Red Hat OpenShift Service Mesh Operator. An existing ServiceMeshMemberRoll resource. Name of the project with the ServiceMeshMemberRoll resource. Names of the projects you want to add or remove from the mesh. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI. Edit the ServiceMeshMemberRoll resource. USD oc edit smmr -n <controlplane-namespace> Modify the YAML to add or remove projects as members. You can add any number of projects, but a project can only belong to one ServiceMeshMemberRoll resource. Example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name 3.5.9. Manual updates If you choose to update manually, the Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. OLM runs by default in OpenShift Container Platform. OLM uses CatalogSources, which use the Operator Registry API, to query for available Operators as well as upgrades for installed Operators. 
For more information about how OpenShift Container Platform handles upgrades, refer to the Operator Lifecycle Manager documentation. 3.5.9.1. Updating sidecar proxies To update the configuration for sidecar proxies, the application administrator must restart the application pods. If your deployment uses automatic sidecar injection, you can update the pod template in the deployment by adding or modifying an annotation. Run the following command to redeploy the pods: USD oc patch deployment/<deployment> -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt": "'`date -Iseconds`'"}}}}}' If your deployment does not use automatic sidecar injection, you must manually update the sidecars by modifying the sidecar container image specified in the deployment or pod, and then restart the pods. 3.5.10. Next steps Prepare to deploy applications on Red Hat OpenShift Service Mesh. 3.6. Customizing security in a Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . If your service mesh application is constructed with a complex array of microservices, you can use Red Hat OpenShift Service Mesh to customize the security of the communication between those services. The infrastructure of OpenShift Container Platform along with the traffic management features of Service Mesh can help you manage the complexity of your applications and provide service and identity security for microservices. 3.6.1. Enabling mutual Transport Layer Security (mTLS) Mutual Transport Layer Security (mTLS) is a protocol where two parties authenticate each other. It is the default mode of authentication in some protocols (IKE, SSH) and optional in others (TLS). mTLS can be used without changes to the application or service code. TLS is handled entirely by the service mesh infrastructure, between the two sidecar proxies. By default, Red Hat OpenShift Service Mesh is set to permissive mode, where the sidecars in Service Mesh accept both plain-text traffic and connections that are encrypted using mTLS. If a service in your mesh is communicating with a service outside the mesh, strict mTLS could break communication between those services. Use permissive mode while you migrate your workloads to Service Mesh. 3.6.1.1. Enabling strict mTLS across the mesh If your workloads do not communicate with services outside your mesh and communication will not be interrupted by only accepting encrypted connections, you can enable mTLS across your mesh quickly. Set spec.istio.global.mtls.enabled to true in your ServiceMeshControlPlane resource. The operator creates the required resources. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true 3.6.1.1.1. Configuring sidecars for incoming connections for specific services You can also configure mTLS for individual services or namespaces by creating a policy. apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {} 3.6.1.2. Configuring sidecars for outgoing connections Create a destination rule to configure Service Mesh to use mTLS when sending requests to other services in the mesh.
apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: "*.local" trafficPolicy: tls: mode: ISTIO_MUTUAL 3.6.1.3. Setting the minimum and maximum protocol versions If your environment has specific requirements for encrypted traffic in your service mesh, you can control the cryptographic functions that are allowed by setting the spec.security.controlPlane.tls.minProtocolVersion or spec.security.controlPlane.tls.maxProtocolVersion in your ServiceMeshControlPlane resource. Those values, configured in your control plane resource, define the minimum and maximum TLS version used by mesh components when communicating securely over TLS. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3 The default is TLS_AUTO and does not specify a version of TLS. Table 3.3. Valid values Value Description TLS_AUTO default TLSv1_0 TLS version 1.0 TLSv1_1 TLS version 1.1 TLSv1_2 TLS version 1.2 TLSv1_3 TLS version 1.3 3.6.2. Configuring cipher suites and ECDH curves Cipher suites and Elliptic-curve Diffie-Hellman (ECDH curves) can help you secure your service mesh. You can define a comma separated list of cipher suites using spec.istio.global.tls.cipherSuites and ECDH curves using spec.istio.global.tls.ecdhCurves in your ServiceMeshControlPlane resource. If either of these attributes are empty, then the default values are used. The cipherSuites setting is effective if your service mesh uses TLS 1.2 or earlier. It has no effect when negotiating with TLS 1.3. Set your cipher suites in the comma separated list in order of priority. For example, ecdhCurves: CurveP256, CurveP384 sets CurveP256 as a higher priority than CurveP384 . Note You must include either TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 or TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 when you configure the cipher suite. HTTP/2 support requires at least one of these cipher suites. The supported cipher suites are: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA The supported ECDH Curves are: CurveP256 CurveP384 CurveP521 X25519 3.6.3. Adding an external certificate authority key and certificate By default, Red Hat OpenShift Service Mesh generates self-signed root certificate and key, and uses them to sign the workload certificates. You can also use the user-defined certificate and key to sign workload certificates, with user-defined root certificate. This task demonstrates an example to plug certificates and key into Service Mesh. Prerequisites You must have installed Red Hat OpenShift Service Mesh with mutual TLS enabled to configure certificates. This example uses the certificates from the Maistra repository . For production, use your own certificates from your certificate authority. 
You must deploy the Bookinfo sample application to verify the results with these instructions. 3.6.3.1. Adding an existing certificate and key To use an existing signing (CA) certificate and key, you must create a chain of trust file that includes the CA certificate, key, and root certificate. You must use the following exact file names for each of the corresponding certificates. The CA certificate is called ca-cert.pem , the key is ca-key.pem , and the root certificate, which signs ca-cert.pem , is called root-cert.pem . If your workload uses intermediate certificates, you must specify them in a cert-chain.pem file. Add the certificates to Service Mesh by following these steps. Save the example certificates from the Maistra repo locally and replace <path> with the path to your certificates. Create a secret named cacerts that includes the input files ca-cert.pem , ca-key.pem , root-cert.pem , and cert-chain.pem . USD oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem \ --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem \ --from-file=<path>/cert-chain.pem In the ServiceMeshControlPlane resource, set global.mtls.enabled to true and security.selfSigned to false . Service Mesh reads the certificates and key from the secret-mount files. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false To make sure the workloads add the new certificates promptly, delete the secrets generated by Service Mesh, named istio.* . In this example, istio.default . Service Mesh issues new certificates for the workloads. USD oc delete secret istio.default 3.6.3.2. Verifying your certificates Use the Bookinfo sample application to verify your certificates are mounted correctly. First, retrieve the mounted certificates. Then, verify the certificates mounted on the pod. Store the pod name in the variable RATINGSPOD . USD RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'` Run the following commands to retrieve the certificates mounted on the proxy. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem The file /tmp/pod-root-cert.pem contains the root certificate propagated to the pod. USD oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem The file /tmp/pod-cert-chain.pem contains the workload certificate and the CA certificate propagated to the pod. Verify the root certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates. USD openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt USD openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt USD diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt Expect the output to be empty. Verify the CA certificate is the same as the one specified by the Operator. Replace <path> with the path to your certificates. USD sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem USD openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt USD openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt USD diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt Expect the output to be empty. Verify the certificate chain from the root certificate to the workload certificate. Replace <path> with the path to your certificates.
USD head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem USD openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem Example output /tmp/pod-cert-chain-workload.pem: OK 3.6.3.3. Removing the certificates To remove the certificates you added, follow these steps. Remove the secret cacerts . USD oc delete secret cacerts -n istio-system Redeploy Service Mesh with a self-signed root certificate in the ServiceMeshControlPlane resource. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true 3.7. Traffic management Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can control the flow of traffic and API calls between services in Red Hat OpenShift Service Mesh. For example, some services in your service mesh may need to communicate within the mesh and others may need to be hidden. Manage the traffic to hide specific backend services, expose services, create testing or versioning deployments, or add a security layer on a set of services. 3.7.1. Using gateways You can use a gateway to manage inbound and outbound traffic for your mesh to specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads. Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Red Hat OpenShift Service Mesh gateways use the full power and flexibility of traffic routing. The Red Hat OpenShift Service Mesh gateway resource can use layer 4-6 load balancing properties, such as ports, to expose and configure Red Hat OpenShift Service Mesh TLS settings. Instead of adding application-layer traffic routing (L7) to the same API resource, you can bind a regular Red Hat OpenShift Service Mesh virtual service to the gateway and manage gateway traffic like any other data plane traffic in a service mesh. Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh. This enables you to limit which services have access to external networks, which adds security control to your service mesh. You can also use a gateway to configure a purely internal proxy. Gateway example A gateway resource describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, SNI configuration for the load balancer, and so on. 
The following example shows a sample gateway configuration for external HTTPS ingress traffic: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key This gateway configuration lets HTTPS traffic from ext-host.example.com into the mesh on port 443, but doesn't specify any routing for the traffic. To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service's gateways field, as shown in the following example: apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy You can then configure the virtual service with routing rules for the external traffic. 3.7.2. Configuring an ingress gateway An ingress gateway is a load balancer operating at the edge of the mesh that receives incoming HTTP/TCP connections. It configures exposed ports and protocols but does not include any traffic routing configuration. Traffic routing for ingress traffic is instead configured with routing rules, the same way as for internal service requests. The following steps show how to create a gateway and configure a VirtualService to expose a service in the Bookinfo sample application to outside traffic for paths /productpage and /login . Procedure Create a gateway to accept traffic. Create a YAML file, and copy the following YAML into it. Gateway example gateway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" Apply the YAML file. USD oc apply -f gateway.yaml Create a VirtualService object to rewrite the host header. Create a YAML file, and copy the following YAML into it. Virtual service example apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080 Apply the YAML file. USD oc apply -f vs.yaml Test that the gateway and VirtualService have been set correctly. Set the Gateway URL. export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Set the port number. In this example, istio-system is the name of the Service Mesh control plane project. export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}') Test a page that has been explicitly exposed. curl -s -I "USDGATEWAY_URL/productpage" The expected result is 200 . 3.7.3. Managing ingress traffic In Red Hat OpenShift Service Mesh, the Ingress Gateway enables features such as monitoring, security, and route rules to apply to traffic that enters the cluster. Use a Service Mesh gateway to expose a service outside of the service mesh. 3.7.3.1. Determining the ingress IP and ports Ingress configuration differs depending on if your environment supports an external load balancer. An external load balancer is set in the ingress IP and ports for the cluster. 
To determine if your cluster's IP and ports are configured for external load balancers, run the following command. In this example, istio-system is the name of the Service Mesh control plane project. USD oc get svc istio-ingressgateway -n istio-system That command returns the NAME , TYPE , CLUSTER-IP , EXTERNAL-IP , PORT(S) , and AGE of each item in your namespace. If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is <none> , or perpetually <pending> , your environment does not provide an external load balancer for the ingress gateway. 3.7.3.1.1. Determining ingress ports with a load balancer Follow these instructions if your environment has an external load balancer. Procedure Run the following command to set the ingress IP and ports. This command sets a variable in your terminal. USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') Run the following command to set the ingress port. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].port}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].port}') Note In some environments, the load balancer may be exposed using a hostname instead of an IP address. For that case, the ingress gateway's EXTERNAL-IP value is not an IP address. Instead, it's a hostname, and the command fails to set the INGRESS_HOST environment variable. In that case, use the following command to correct the INGRESS_HOST value: USD export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') 3.7.3.1.2. Determining ingress ports without a load balancer If your environment does not have an external load balancer, determine the ingress ports and use a node port instead. Procedure Set the ingress ports. USD export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') Run the following command to set the secure ingress port. USD export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') Run the following command to set the TCP ingress port. USD export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="tcp")].nodePort}') 3.7.4. Automatic route creation OpenShift routes for Istio Gateways are automatically managed in Red Hat OpenShift Service Mesh. Every time an Istio Gateway is created, updated or deleted inside the service mesh, an OpenShift route is created, updated or deleted. 3.7.4.1. Enabling Automatic Route Creation A Red Hat OpenShift Service Mesh control plane component called Istio OpenShift Routing (IOR) synchronizes the gateway route. Enable IOR as part of the control plane deployment. If the Gateway contains a TLS section, the OpenShift Route will be configured to support TLS. In the ServiceMeshControlPlane resource, add the ior_enabled parameter and set it to true . 
For example, see the following resource snippet: spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true 3.7.4.2. Subdomains Red Hat OpenShift Service Mesh creates the route with the subdomain, but OpenShift Container Platform must be configured to enable it. Subdomains, for example *.domain.com , are supported but not by default. Configure an OpenShift Container Platform wildcard policy before configuring a wildcard host Gateway. For more information, see the "Links" section. If the following gateway is created: apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com Then, the following OpenShift Routes are created automatically. You can check that the routes are created with the following command. USD oc -n <control_plane_namespace> get routes Expected output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None If the gateway is deleted, Red Hat OpenShift Service Mesh deletes the routes. However, routes created manually are never modified by Red Hat OpenShift Service Mesh. 3.7.5. Understanding service entries A service entry adds an entry to the service registry that Red Hat OpenShift Service Mesh maintains internally. After you add the service entry, the Envoy proxies send traffic to the service as if it is a service in your mesh. Service entries allow you to do the following: Manage traffic for services that run outside of the service mesh. Redirect and forward traffic for external destinations (such as, APIs consumed from the web) or traffic to services in legacy infrastructure. Define retry, timeout, and fault injection policies for external destinations. Run a mesh service in a Virtual Machine (VM) by adding VMs to your mesh. Note Add services from a different cluster to the mesh to configure a multicluster Red Hat OpenShift Service Mesh mesh on Kubernetes. Service entry examples The following example is a mesh-external service entry that adds the ext-resource external dependency to the Red Hat OpenShift Service Mesh service registry: apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS Specify the external resource using the hosts field. You can qualify it fully or use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in the same way you configure traffic for any other service in the mesh. For example, the following destination rule configures the traffic route to use mutual TLS to secure the connection to the ext-svc.example.com external service that is configured using the service entry: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem 3.7.6. 
Using VirtualServices You can route requests dynamically to multiple versions of a microservice through Red Hat OpenShift Service Mesh with a virtual service. With virtual services, you can: Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. A virtual service enables you to turn a monolithic application into a service consisting of distinct microservices with a seamless consumer experience. Configure traffic rules in combination with gateways to control ingress and egress traffic. 3.7.6.1. Configuring VirtualServices Requests are routed to services within a service mesh with virtual services. Each virtual service consists of a set of routing rules that are evaluated in order. Red Hat OpenShift Service Mesh matches each given request to the virtual service to a specific real destination within the mesh. Without virtual services, Red Hat OpenShift Service Mesh distributes traffic using least requests load balancing between all service instances. With a virtual service, you can specify traffic behavior for one or more hostnames. Routing rules in the virtual service tell Red Hat OpenShift Service Mesh how to send the traffic for the virtual service to appropriate destinations. Route destinations can be versions of the same service or entirely different services. Procedure Create a YAML file using the following example to route requests to different versions of the Bookinfo sample application service depending on which user connects to the application. Example VirtualService.yaml apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3 Run the following command to apply VirtualService.yaml , where VirtualService.yaml is the path to the file. USD oc apply -f <VirtualService.yaml> 3.7.6.2. VirtualService configuration reference Parameter Description The hosts field lists the virtual service's destination address to which the routing rules apply. This is the address(es) that are used to send requests to the service. The virtual service hostname can be an IP address, a DNS name, or a short name that resolves to a fully qualified domain name. The http section contains the virtual service's routing rules which describe match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination as specified in the hosts field. A routing rule consists of the destination where you want the traffic to go and any specified match conditions. The first routing rule in the example has a condition that begins with the match field. In this example, this routing applies to all requests from the user jason . Add the headers , end-user , and exact fields to select the appropriate requests. The destination field in the route section specifies the actual destination for traffic that matches this condition. Unlike the virtual service's host, the destination's host must be a real destination that exists in the Red Hat OpenShift Service Mesh service registry. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this example, the hostname is a Kubernetes service name: 3.7.7. 
Understanding destination rules Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic's real destination. Virtual services route traffic to a destination. Destination rules configure what happens to traffic at that destination. By default, Red Hat OpenShift Service Mesh uses a least requests load balancing policy, where the service instance in the pool with the least number of active connections receives the request. Red Hat OpenShift Service Mesh also supports the following models, which you can specify in destination rules for requests to a particular service or service subset. Random: Requests are forwarded at random to instances in the pool. Weighted: Requests are forwarded to instances in the pool according to a specific percentage. Least requests: Requests are forwarded to instances with the least number of requests. Destination rule example The following example destination rule configures three different subsets for the my-svc destination service, with different load balancing policies: apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3 This guide uses the Bookinfo sample application to provide routing examples. Install the Bookinfo application to learn how these routing examples work. 3.7.8. Bookinfo routing tutorial The Service Mesh Bookinfo sample application consists of four separate microservices, each with multiple versions. After installing the Bookinfo sample application, three different versions of the reviews microservice run concurrently. When you access the Bookinfo application /productpage in a browser and refresh several times, sometimes the book review output contains star ratings and other times it does not. Without an explicit default service version to route to, Service Mesh routes requests to all available versions one after the other. This tutorial helps you apply rules that route all traffic to v1 (version 1) of the microservices. Later, you can apply a rule to route traffic based on the value of an HTTP request header. Prerequisites Deploy the Bookinfo sample application to work with the following examples. 3.7.8.1. Applying a virtual service In the following procedure, you apply virtual services that set v1 as the default version for each microservice, which routes all traffic to v1 of each microservice. Procedure Apply the virtual services. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml To verify that you applied the virtual services, display the defined routes with the following command: USD oc get virtualservices -o yaml That command returns a resource of kind: VirtualService in YAML format. You have configured Service Mesh to route traffic to the v1 version of the Bookinfo microservices, including version 1 of the reviews service. 3.7.8.2. Testing the new route configuration Test the new configuration by refreshing the /productpage of the Bookinfo application. Procedure Set the value for the GATEWAY_URL parameter. You can use this variable to find the URL for your Bookinfo product page later. In this example, istio-system is the name of the control plane project. 
export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') Run the following command to retrieve the URL for the product page. echo "http://USDGATEWAY_URL/productpage" Open the Bookinfo site in your browser. The reviews part of the page displays with no rating stars, no matter how many times you refresh. This is because you configured Service Mesh to route all traffic for the reviews service to the version reviews:v1 and this version of the service does not access the star ratings service. Your service mesh now routes traffic to one version of a service. 3.7.8.3. Route based on user identity Change the route configuration so that all traffic from a specific user is routed to a specific service version. In this case, all traffic from a user named jason will be routed to the service reviews:v2 . Service Mesh does not have any special, built-in understanding of user identity. This example is enabled by the fact that the productpage service adds a custom end-user header to all outbound HTTP requests to the reviews service. Procedure Run the following command to enable user-based routing in the Bookinfo sample application. USD oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml Run the following command to confirm the rule is created. This command returns all resources of kind: VirtualService in YAML format. USD oc get virtualservice reviews -o yaml On the /productpage of the Bookinfo app, log in as user jason with no password. Refresh the browser. The star ratings appear next to each review. Log in as another user (pick any name you want). Refresh the browser. Now the stars are gone. Traffic is now routed to reviews:v1 for all users except Jason. You have successfully configured the Bookinfo sample application to route traffic based on user identity. 3.7.9. Additional resources For more information about configuring an OpenShift Container Platform wildcard policy, see Using wildcard routes . 3.8. Deploying applications on Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . When you deploy an application into the Service Mesh, there are several differences between the behavior of applications in the upstream community version of Istio and the behavior of applications within a Red Hat OpenShift Service Mesh installation. 3.8.1. Prerequisites Review Comparing Red Hat OpenShift Service Mesh and upstream Istio community installations Review Installing Red Hat OpenShift Service Mesh 3.8.2. Creating control plane templates You can create reusable configurations with ServiceMeshControlPlane templates. Individual users can extend the templates they create with their own configurations. Templates can also inherit configuration information from other templates. For example, you can create an accounting control plane for the accounting team and a marketing control plane for the marketing team. If you create a development template and a production template, members of the marketing team and the accounting team can extend the development and production templates with team-specific customization. 
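A custom template is an ordinary ServiceMeshControlPlane definition that contains only the settings you want teams to inherit. The following sketch shows what a hypothetical development template might look like; the development name, the reduced proxy resources, and the disabled tracing are illustrative assumptions rather than values defined elsewhere in this guide. Example sketch of a hypothetical development template

apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: development
spec:
  istio:
    global:
      proxy:
        resources:
          requests:
            # Smaller proxy footprint for development clusters (assumed values)
            cpu: 10m
            memory: 64Mi
    pilot:
      autoscaleEnabled: false
    tracing:
      # Disable tracing to reduce resource usage in development (assumed value)
      enabled: false

The name of the file that you add to the smcp-templates ConfigMap in the following procedure is used as the template name that users reference from the template parameter of their own ServiceMeshControlPlane resources.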
When you configure control plane templates, which follow the same syntax as the ServiceMeshControlPlane , users inherit settings in a hierarchical fashion. The Operator is delivered with a default template with default settings for Red Hat OpenShift Service Mesh. To add custom templates, you must create a ConfigMap named smcp-templates in the openshift-operators project and mount the ConfigMap in the Operator container at /usr/local/share/istio-operator/templates . 3.8.2.1. Creating the ConfigMap Follow this procedure to create the ConfigMap. Prerequisites An installed, verified Service Mesh Operator. An account with the cluster-admin role. Location of the Operator deployment. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. From the CLI, run this command to create the ConfigMap named smcp-templates in the openshift-operators project and replace <templates-directory> with the location of the ServiceMeshControlPlane files on your local disk: USD oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators Locate the Operator ClusterServiceVersion name. USD oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh' Example output maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded Edit the Operator cluster service version to instruct the Operator to use the smcp-templates ConfigMap. USD oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0 Add a volume mount and volume to the Operator deployment. deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates ... Save your changes and exit the editor. You can now use the template parameter in the ServiceMeshControlPlane to specify a template. apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default 3.8.3. Enabling automatic sidecar injection When deploying an application, you must opt in to injection by configuring the label sidecar.istio.io/inject in spec.template.metadata.labels to true in the deployment object. Opting in ensures that the sidecar injection does not interfere with other OpenShift Container Platform features such as builder pods used by numerous frameworks within the OpenShift Container Platform ecosystem. Prerequisites Identify the namespaces that are part of your service mesh and the deployments that need automatic sidecar injection. Procedure To find your deployments, use the oc get command. USD oc get deployment -n <namespace> For example, to view the Deployment YAML file for the 'ratings-v1' microservice in the bookinfo namespace, use the following command to see the resource in YAML format. USD oc get deployment -n bookinfo ratings-v1 -o yaml Open the application's Deployment YAML file in an editor. Add the sidecar.istio.io/inject label to spec.template.metadata.labels in your Deployment YAML file and set it to true as shown in the following example. 
Example snippet from bookinfo deployment-ratings-v1.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true' Note Using the annotations parameter when enabling automatic sidecar injection is deprecated and is replaced by using the labels parameter. Save the Deployment YAML file. Add the file back to the project that contains your app. USD oc apply -n <namespace> -f deployment.yaml In this example, bookinfo is the name of the project that contains the ratings-v1 app and deployment-ratings-v1.yaml is the file you edited. USD oc apply -n bookinfo -f deployment-ratings-v1.yaml To verify that the resource uploaded successfully, run the following command. USD oc get deployment -n <namespace> <deploymentName> -o yaml For example, USD oc get deployment -n bookinfo ratings-v1 -o yaml 3.8.4. Setting proxy environment variables through annotations Configuration for the Envoy sidecar proxies is managed by the ServiceMeshControlPlane . You can set environment variables for the sidecar proxy for applications by adding pod annotations to the deployment in the injection-template.yaml file. The environment variables are injected into the sidecar. Example injection-template.yaml apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: "{ \"maistra_test_env\": \"env_value\", \"maistra_test_env_2\": \"env_value_2\" }" Warning You should never include maistra.io/ labels and annotations when creating your own custom resources. These labels and annotations indicate that the resources are generated and managed by the Operator. If you are copying content from an Operator-generated resource when creating your own resources, do not include labels or annotations that start with maistra.io/ . Resources that include these labels or annotations will be overwritten or deleted by the Operator during the reconciliation. 3.8.5. Updating Mixer policy enforcement In previous versions of Red Hat OpenShift Service Mesh, Mixer's policy enforcement was enabled by default. Mixer policy enforcement is now disabled by default. You must enable it before running policy tasks. Prerequisites Access to the OpenShift CLI ( oc ). Note The examples use istio-system as the control plane namespace. Replace this value with the namespace where you deployed the Service Mesh Control Plane (SMCP). Procedure Log in to the OpenShift Container Platform CLI. Run this command to check the current Mixer policy enforcement status: USD oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks If disablePolicyChecks: true , edit the Service Mesh ConfigMap: USD oc edit cm -n istio-system istio Locate disablePolicyChecks: true within the ConfigMap and change the value to false . Save the configuration and exit the editor. Re-check the Mixer policy enforcement status to ensure it is set to false . 3.8.5.1. Setting the correct network policy Service Mesh creates network policies in the Service Mesh control plane and member namespaces to allow traffic between them. Before you deploy, consider the following conditions to ensure that the services in your service mesh that were previously exposed through an OpenShift Container Platform route remain reachable. Traffic into the service mesh must always go through the ingress-gateway for Istio to work properly. 
Deploy services external to the service mesh in separate namespaces that are not in any service mesh. Non-mesh services that need to be deployed within a service mesh enlisted namespace should label their deployments maistra.io/expose-route: "true" , which ensures OpenShift Container Platform routes to these services still work. 3.8.6. Bookinfo example application The Bookinfo example application allows you to test your Red Hat OpenShift Service Mesh 2.6.6 installation on OpenShift Container Platform. The Bookinfo application displays information about a book, similar to a single catalog entry of an online book store. The application displays a page that describes the book, book details (ISBN, number of pages, and other information), and book reviews. The Bookinfo application consists of these microservices: The productpage microservice calls the details and reviews microservices to populate the page. The details microservice contains book information. The reviews microservice contains book reviews. It also calls the ratings microservice. The ratings microservice contains book ranking information that accompanies a book review. There are three versions of the reviews microservice: Version v1 does not call the ratings Service. Version v2 calls the ratings Service and displays each rating as one to five black stars. Version v3 calls the ratings Service and displays each rating as one to five red stars. 3.8.6.1. Installing the Bookinfo application This tutorial walks you through how to create a sample application by creating a project, deploying the Bookinfo application to that project, and viewing the running application in Service Mesh. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). You are logged in to OpenShift Container Platform as`cluster-admin`. Note The Bookinfo sample application cannot be installed on IBM Z(R) and IBM Power(R). Note The commands in this section assume the Service Mesh control plane project is istio-system . If you installed the control plane in another namespace, edit each command before you run it. Procedure Click Home Projects . Click Create Project . Enter bookinfo as the Project Name , enter a Display Name , and enter a Description , then click Create . Alternatively, you can run this command from the CLI to create the bookinfo project. USD oc new-project bookinfo Click Operators Installed Operators . Click the Project menu and use the Service Mesh control plane namespace. In this example, use istio-system . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Member Roll tab. If you have already created a Istio Service Mesh Member Roll, click the name, then click the YAML tab to open the YAML editor. If you have not created a ServiceMeshMemberRoll , click Create ServiceMeshMemberRoll . Click Members , then enter the name of your project in the Value field. Click Create to save the updated Service Mesh Member Roll. Or, save the following example to a YAML file. Bookinfo ServiceMeshMemberRoll example servicemeshmemberroll-default.yaml apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo Run the following command to upload that file and create the ServiceMeshMemberRoll resource in the istio-system namespace. In this example, istio-system is the name of the Service Mesh control plane project. 
USD oc create -n istio-system -f servicemeshmemberroll-default.yaml Run the following command to verify the ServiceMeshMemberRoll was created successfully. USD oc get smmr -n istio-system -o wide The installation has finished successfully when the STATUS column is Configured . NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s ["bookinfo"] From the CLI, deploy the Bookinfo application in the `bookinfo` project by applying the bookinfo.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml You should see output similar to the following: service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created Create the ingress gateway by applying the bookinfo-gateway.yaml file: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml You should see output similar to the following: gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created Set the value for the GATEWAY_URL parameter: USD export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}') 3.8.6.2. Adding default destination rules Before you can use the Bookinfo application, you must first add default destination rules. There are two preconfigured YAML files, depending on whether or not you enabled mutual transport layer security (TLS) authentication. Procedure To add destination rules, run one of the following commands: If you did not enable mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml If you enabled mutual TLS: USD oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml You should see output similar to the following: destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created 3.8.6.3. Verifying the Bookinfo installation To confirm that the sample Bookinfo application was successfully deployed, perform the following steps. Prerequisites Red Hat OpenShift Service Mesh installed. Complete the steps for installing the Bookinfo sample app. You are logged in to OpenShift Container Platform as`cluster-admin`. Procedure from CLI Verify that all pods are ready with this command: USD oc get pods -n bookinfo All pods should have a status of Running . 
You should see output similar to the following: NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m Run the following command to retrieve the URL for the product page: echo "http://USDGATEWAY_URL/productpage" Copy and paste the output into a web browser to verify the Bookinfo product page is deployed. Procedure from Kiali web console Obtain the address for the Kiali web console. Log in to the OpenShift Container Platform web console. Navigate to Networking Routes . On the Routes page, select the Service Mesh control plane project, for example istio-system , from the Namespace menu. The Location column displays the linked address for each route. Click the link in the Location column for Kiali. Click Log In With OpenShift . The Kiali Overview screen presents tiles for each project namespace. In Kiali, click Graph . Select bookinfo from the Namespace list, and App graph from the Graph Type list. Click Display idle nodes from the Display menu. This displays nodes that are defined but have not received or sent requests. It can confirm that an application is properly defined, but that no request traffic has been reported. Use the Duration menu to increase the time period to help ensure older traffic is captured. Use the Refresh Rate menu to refresh traffic more or less often, or not at all. Click Services , Workloads , or Istio Config to see list views of bookinfo components, and confirm that they are healthy. 3.8.6.4. Removing the Bookinfo application Follow these steps to remove the Bookinfo application. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Access to the OpenShift CLI ( oc ). 3.8.6.4.1. Delete the Bookinfo project Procedure Log in to the OpenShift Container Platform web console. Click Home Projects . Click the bookinfo menu , and then click Delete Project . Type bookinfo in the confirmation dialog box, and then click Delete . Alternatively, you can run this command using the CLI to delete the bookinfo project. USD oc delete project bookinfo 3.8.6.4.2. Remove the Bookinfo project from the Service Mesh member roll Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click the Project menu and choose istio-system from the list. Click the Istio Service Mesh Member Roll link under Provided APIs for the Red Hat OpenShift Service Mesh Operator. Click the ServiceMeshMemberRoll menu and select Edit Service Mesh Member Roll . Edit the default Service Mesh Member Roll YAML and remove bookinfo from the members list. Alternatively, you can run this command using the CLI to remove the bookinfo project from the ServiceMeshMemberRoll . In this example, istio-system is the name of the Service Mesh control plane project. USD oc -n istio-system patch --type='json' smmr default -p '[{"op": "remove", "path": "/spec/members", "value":["'"bookinfo"'"]}]' Click Save to update the Service Mesh Member Roll. 3.8.7. Generating example traces and analyzing trace data Jaeger is an open source distributed tracing system. With Jaeger, you can perform a trace that follows the path of a request through various microservices which make up an application. Jaeger is installed by default as part of the Service Mesh. 
This tutorial uses Service Mesh and the Bookinfo sample application to demonstrate how you can use Jaeger to perform distributed tracing. Prerequisites OpenShift Container Platform 4.1 or higher installed. Red Hat OpenShift Service Mesh 2.6.6 installed. Jaeger enabled during the installation. Bookinfo example application installed. Procedure After installing the Bookinfo sample application, send traffic to the mesh. Enter the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. In the OpenShift Container Platform console, navigate to Networking Routes and search for the Jaeger route, which is the URL listed under Location . Alternatively, use the CLI to query for details of the route. In this example, istio-system is the Service Mesh control plane namespace: USD export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}') Enter the following command to reveal the URL for the Jaeger console. Paste the result in a browser and navigate to that URL. echo USDJAEGER_URL Log in using the same user name and password as you use to access the OpenShift Container Platform console. In the left pane of the Jaeger dashboard, from the Service menu, select productpage.bookinfo and click Find Traces at the bottom of the pane. A list of traces is displayed. Click one of the traces in the list to open a detailed view of that trace. If you click the first one in the list, which is the most recent trace, you see the details that correspond to the latest refresh of the /productpage . 3.9. Data visualization and observability Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can view your application's topology, health and metrics in the Kiali console. If your service is having issues, the Kiali console offers ways to visualize the data flow through your service. You can view insights about the mesh components at different levels, including abstract applications, services, and workloads. It also provides an interactive graph view of your namespace in real time. Before you begin You can observe the data flow through your application if you have an application installed. If you don't have your own application installed, you can see how observability works in Red Hat OpenShift Service Mesh by installing the Bookinfo sample application . 3.9.1. Viewing service mesh data The Kiali operator works with the telemetry data gathered in Red Hat OpenShift Service Mesh to provide graphs and real-time network diagrams of the applications, services, and workloads in your namespace. To access the Kiali console you must have Red Hat OpenShift Service Mesh installed and projects configured for the service mesh. Procedure Use the perspective switcher to switch to the Administrator perspective. Click Home Projects . Click the name of your project. For example, click bookinfo . In the Launcher section, click Kiali . Log in to the Kiali console with the same user name and password that you use to access the OpenShift Container Platform console. 
When you first log in to the Kiali Console, you see the Overview page which displays all the namespaces in your service mesh that you have permission to view. If you are validating the console installation, there might not be any data to display. 3.9.2. Viewing service mesh data in the Kiali console The Kiali Graph offers a powerful visualization of your mesh traffic. The topology combines real-time request traffic with your Istio configuration information to present immediate insight into the behavior of your service mesh, letting you quickly pinpoint issues. Multiple Graph Types let you visualize traffic as a high-level service topology, a low-level workload topology, or as an application-level topology. There are several graphs to choose from: The App graph shows an aggregate workload for all applications that are labeled the same. The Service graph shows a node for each service in your mesh but excludes all applications and workloads from the graph. It provides a high-level view and aggregates all traffic for defined services. The Versioned App graph shows a node for each version of an application. All versions of an application are grouped together. The Workload graph shows a node for each workload in your service mesh. This graph does not require you to use the application and version labels. If your application does not use version labels, use this graph. Graph nodes are decorated with a variety of information, pointing out various routing options like virtual services and service entries, as well as special configuration like fault-injection and circuit breakers. It can identify mTLS issues, latency issues, error traffic, and more. The Graph is highly configurable, can show traffic animation, and has powerful Find and Hide abilities. Click the Legend button to view information about the shapes, colors, arrows, and badges displayed in the graph. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel. 3.9.2.1. Changing graph layouts in Kiali The layout for the Kiali graph can render differently depending on your application architecture and the data to display. For example, the number of graph nodes and their interactions can determine how the Kiali graph is rendered. Because it is not possible to create a single layout that renders nicely for every situation, Kiali offers a choice of several different layouts. Prerequisites If you do not have your own application installed, install the Bookinfo sample application. Then generate traffic for the Bookinfo application by entering the following command several times. USD curl "http://USDGATEWAY_URL/productpage" This command simulates a user visiting the productpage microservice of the application. Procedure Launch the Kiali console. Click Log In With OpenShift . In the Kiali console, click Graph to view a namespace graph. From the Namespace menu, select your application namespace, for example, bookinfo . To choose a different graph layout, do either or both of the following: Select different graph data groupings from the menu at the top of the graph. App graph Service graph Versioned App graph (default) Workload graph Select a different graph layout from the Legend at the bottom of the graph. Layout default dagre Layout 1 cose-bilkent Layout 2 cola 3.10. Custom resources Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. 
For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . You can customize your Red Hat OpenShift Service Mesh by modifying the default Service Mesh custom resource or by creating a new custom resource. 3.10.1. Prerequisites An account with the cluster-admin role. Completed the Preparing to install Red Hat OpenShift Service Mesh process. Have installed the operators. 3.10.2. Red Hat OpenShift Service Mesh custom resources Note The istio-system project is used as an example throughout the Service Mesh documentation, but you can use other projects as necessary. A custom resource allows you to extend the API in an Red Hat OpenShift Service Mesh project or cluster. When you deploy Service Mesh it creates a default ServiceMeshControlPlane that you can modify to change the project parameters. The Service Mesh operator extends the API by adding the ServiceMeshControlPlane resource type, which enables you to create ServiceMeshControlPlane objects within projects. By creating a ServiceMeshControlPlane object, you instruct the Operator to install a Service Mesh control plane into the project, configured with the parameters you set in the ServiceMeshControlPlane object. This example ServiceMeshControlPlane definition contains all of the supported parameters and deploys Red Hat OpenShift Service Mesh 1.1.18.2 images based on Red Hat Enterprise Linux (RHEL). Important The 3scale Istio Adapter is deployed and configured in the custom resource file. It also requires a working 3scale account ( SaaS or On-Premises ). Example istio-installation.yaml apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one 3.10.3. ServiceMeshControlPlane parameters The following examples illustrate use of the ServiceMeshControlPlane parameters and the tables provide additional information about supported parameters. Important The resources you configure for Red Hat OpenShift Service Mesh with these parameters, including CPUs, memory, and the number of pods, are based on the configuration of your OpenShift Container Platform cluster. Configure these parameters based on the available resources in your current cluster configuration. 3.10.3.1. Istio global example Here is an example that illustrates the Istio global parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Note In order for the 3scale Istio Adapter to work, disablePolicyChecks must be false . Example global parameters istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret Table 3.4. Global parameters Parameter Description Values Default value disablePolicyChecks This parameter enables/disables policy checks. 
true / false true policyCheckFailOpen This parameter indicates whether traffic is allowed to pass through to the Envoy sidecar when the Mixer policy service cannot be reached. true / false false tag The tag that the Operator uses to pull the Istio images. A valid container image tag. 1.1.0 hub The hub that the Operator uses to pull Istio images. A valid image repository. maistra/ or registry.redhat.io/openshift-service-mesh/ mtls This parameter controls whether to enable/disable Mutual Transport Layer Security (mTLS) between services by default. true / false false imagePullSecrets If access to the registry providing the Istio images is secure, list an imagePullSecret here. redhat-registry-pullsecret OR quay-pullsecret None These parameters are specific to the proxy subset of global parameters. Table 3.5. Proxy parameters Type Parameter Description Values Default value requests cpu The amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 10m memory The amount of memory requested for Envoy proxy Available memory in bytes(for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum amount of CPU resources requested for Envoy proxy. CPU resources, specified in cores or millicores (for example, 200m, 0.5, 1) based on your environment's configuration. 2000m memory The maximum amount of memory Envoy proxy is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 1024Mi 3.10.3.2. Istio gateway configuration Here is an example that illustrates the Istio gateway parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example gateway parameters gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 Table 3.6. Istio Gateway parameters Parameter Description Values Default value gateways.egress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.egress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.egress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the egress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 gateways.ingress.runtime.deployment.autoScaling.enabled This parameter enables/disables autoscaling. true / false true gateways.ingress.runtime.deployment.autoScaling.minReplicas The minimum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 gateways.ingress.runtime.deployment.autoScaling.maxReplicas The maximum number of pods to deploy for the ingress gateway based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Cluster administrators can refer to Using wildcard routes for instructions on how to enable subdomains. 3.10.3.3. 
Istio Mixer configuration Here is an example that illustrates the Mixer parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example mixer parameters mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits: Table 3.7. Istio Mixer policy parameters Parameter Description Values Default value enabled This parameter enables/disables Mixer. true / false true autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true autoscaleMin The minimum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 1 autoscaleMax The maximum number of pods to deploy based on the autoscaleEnabled setting. A valid number of allocatable pods based on your environment's configuration. 5 Table 3.8. Istio Mixer telemetry parameters Type Parameter Description Values Default requests cpu The percentage of CPU resources requested for Mixer telemetry. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Mixer telemetry. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi limits cpu The maximum percentage of CPU resources Mixer telemetry is permitted to use. CPU resources in millicores based on your environment's configuration. 4800m memory The maximum amount of memory Mixer telemetry is permitted to use. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 4G 3.10.3.4. Istio Pilot configuration You can configure Pilot to schedule or set limits on resource allocation. The following example illustrates the Pilot parameters for the ServiceMeshControlPlane and a description of the available parameters with appropriate values. Example pilot parameters spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M Table 3.9. Istio Pilot parameters Parameter Description Values Default value cpu The percentage of CPU resources requested for Pilot. CPU resources in millicores based on your environment's configuration. 10m memory The amount of memory requested for Pilot. Available memory in bytes (for example, 200Ki, 50Mi, 5Gi) based on your environment's configuration. 128Mi autoscaleEnabled This parameter enables/disables autoscaling. Disable this for small environments. true / false true traceSampling This value controls how often random sampling occurs. Note: Increase for development or testing. A valid percentage. 1.0 3.10.4. Configuring Kiali When the Service Mesh Operator creates the ServiceMeshControlPlane it also processes the Kiali resource. The Kiali Operator then uses this object when creating Kiali instances. The default Kiali parameters specified in the ServiceMeshControlPlane are as follows: Example Kiali parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true Table 3.10. 
Kiali parameters Parameter Description Values Default value This parameter enables/disables Kiali. Kiali is enabled by default. true / false true This parameter enables/disables view-only mode for the Kiali console. When view-only mode is enabled, users cannot use the console to make changes to the Service Mesh. true / false false This parameter enables/disables ingress for Kiali. true / false true 3.10.4.1. Configuring Kiali for Grafana When you install Kiali and Grafana as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Grafana is enabled as an external service for Kiali Grafana authorization for the Kiali console Grafana URL for the Kiali console Kiali can automatically detect the Grafana URL. However if you have a custom Grafana installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Grafana parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: "https://grafana-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.4.2. Configuring Kiali for Jaeger When you install Kiali and Jaeger as part of Red Hat OpenShift Service Mesh the Operator configures the following by default: Jaeger is enabled as an external service for Kiali Jaeger authorization for the Kiali console Jaeger URL for the Kiali console Kiali can automatically detect the Jaeger URL. However if you have a custom Jaeger installation that is not easily auto-detectable by Kiali, you must update the URL value in the ServiceMeshControlPlane resource. Additional Jaeger parameters spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: "http://jaeger-query-istio-system.127.0.0.1.nip.io" ingress: enabled: true 3.10.5. Configuring Jaeger When the Service Mesh Operator creates the ServiceMeshControlPlane resource it can also create the resources for distributed tracing. Service Mesh uses Jaeger for distributed tracing. You can specify your Jaeger configuration in either of two ways: Configure Jaeger in the ServiceMeshControlPlane resource. There are some limitations with this approach. Configure Jaeger in a custom Jaeger resource and then reference that Jaeger instance in the ServiceMeshControlPlane resource. If a Jaeger resource matching the value of name exists, the control plane will use the existing installation. This approach lets you fully customize your Jaeger configuration. The default Jaeger parameters specified in the ServiceMeshControlPlane are as follows: Default all-in-one Jaeger parameters apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one Table 3.11. Jaeger parameters Parameter Description Values Default value This parameter enables/disables installing and deploying tracing by the Service Mesh Operator. Installing Jaeger is enabled by default. To use an existing Jaeger deployment, set this value to false . true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one - For development, testing, demonstrations, and proof of concept. production-elasticsearch - For production use. all-in-one Note The default template in the ServiceMeshControlPlane resource is the all-in-one deployment strategy which uses in-memory storage. 
For production, the only supported storage option is Elasticsearch, therefore you must configure the ServiceMeshControlPlane to request the production-elasticsearch template when you deploy Service Mesh within a production environment. 3.10.5.1. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.12. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . 
Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.2. Connecting to an existing Jaeger instance In order for the SMCP to connect to an existing Jaeger instance, the following must be true: The Jaeger instance is deployed in the same namespace as the control plane, for example, into the istio-system namespace. To enable secure communication between services, you should enable the oauth-proxy, which secures communication to your Jaeger instance, and make sure the secret is mounted into your Jaeger instance so Kiali can communicate with it. To use a custom or already existing Jaeger instance, set spec.istio.tracing.enabled to "false" to disable the deployment of a Jaeger instance. Supply the correct jaeger-collector endpoint to Mixer by setting spec.istio.global.tracer.zipkin.address to the hostname and port of your jaeger-collector service. The hostname of the service is usually <jaeger-instance-name>-collector.<namespace>.svc.cluster.local . Supply the correct jaeger-query endpoint to Kiali for gathering traces by setting spec.istio.kiali.jaegerInClusterURL to the hostname of your jaeger-query service - the port is normally not required, as it uses 443 by default. The hostname of the service is usually <jaeger-instance-name>-query.<namespace>.svc.cluster.local . Supply the dashboard URL of your Jaeger instance to Kiali to enable accessing Jaeger through the Kiali console. You can retrieve the URL from the OpenShift route that is created by the Jaeger Operator. If your Jaeger resource is called external-jaeger and resides in the istio-system project, you can retrieve the route using the following command: USD oc get route -n istio-system external-jaeger Example output NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...] The value under HOST/PORT is the externally accessible URL of the Jaeger dashboard. Example Jaeger resource apiVersion: jaegertracing.io/v1 kind: "Jaeger" metadata: name: "external-jaeger" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{"namespace": "istio-system", "resource": "pods", "verb": "get"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd The following ServiceMeshControlPlane example assumes that you have deployed Jaeger using the Jaeger Operator and the example Jaeger resource. 
Example ServiceMeshControlPlane with external Jaeger apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local 3.10.5.3. Configuring Elasticsearch The default Jaeger deployment strategy uses the all-in-one template so that the installation can be completed using minimal resources. However, because the all-in-one template uses in-memory storage, it is only recommended for development, demo, or testing purposes and should NOT be used for production environments. If you are deploying Service Mesh and Jaeger in a production environment you must change the template to the production-elasticsearch template, which uses Elasticsearch for Jaeger's storage needs. Elasticsearch is a memory intensive application. The initial set of nodes specified in the default OpenShift Container Platform installation may not be large enough to support the Elasticsearch cluster. You should modify the default Elasticsearch configuration to match your use case and the resources you have requested for your OpenShift Container Platform installation. You can adjust both the CPU and memory limits for each component by modifying the resources block with valid CPU and memory values. Additional nodes must be added to the cluster if you want to run with the recommended amount (or more) of memory. Ensure that you do not exceed the resources requested for your OpenShift Container Platform installation. Default "production" Jaeger parameters with Elasticsearch apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: "1" memory: "16Gi" limits: cpu: "1" memory: "16Gi" Table 3.13. Elasticsearch parameters Parameter Description Values Default Value Examples This parameter enables/disables tracing in Service Mesh. Jaeger is installed by default. true / false true This parameter enables/disables ingress for Jaeger. true / false true This parameter specifies which Jaeger deployment strategy to use. all-in-one / production-elasticsearch all-in-one Number of Elasticsearch nodes to create. Integer value. 1 Proof of concept = 1, Minimum deployment =3 Number of central processing units for requests, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). 1Gi Proof of concept = 500m, Minimum deployment =1 Available memory for requests, based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 500m Proof of concept = 1Gi, Minimum deployment = 16Gi* Limit on number of central processing units, based on your environment's configuration. Specified in cores or millicores (for example, 200m, 0.5, 1). Proof of concept = 500m, Minimum deployment =1 Available memory limit based on your environment's configuration. Specified in bytes (for example, 200Ki, 50Mi, 5Gi). 
Proof of concept = 1Gi, Minimum deployment = 16Gi* * Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than 16Gi allocated to each pod by default, but preferably allocate as much as you can, up to 64Gi per pod. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Operators Installed Operators . Click the Red Hat OpenShift Service Mesh Operator. Click the Istio Service Mesh Control Plane tab. Click the name of your control plane file, for example, basic-install . Click the YAML tab. Edit the Jaeger parameters, replacing the default all-in-one template with parameters for the production-elasticsearch template, modified for your use case. Ensure that the indentation is correct. Click Save . Click Reload . OpenShift Container Platform redeploys Jaeger and creates the Elasticsearch resources based on the specified parameters. 3.10.5.4. Configuring the Elasticsearch index cleaner job When the Service Mesh Operator creates the ServiceMeshControlPlane it also creates the custom resource (CR) for Jaeger. The Red Hat OpenShift distributed tracing platform (Jaeger) Operator then uses this CR when creating Jaeger instances. When using Elasticsearch storage, by default a job is created to clean old traces from it. To configure the options for this job, you edit the Jaeger custom resource (CR), to customize it for your use case. The relevant options are listed below. apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: "55 23 * * *" Table 3.14. Elasticsearch index cleaner parameters Parameter Values Description enabled: true/ false Enable or disable the index cleaner job. numberOfDays: integer value Number of days to wait before deleting an index. schedule: "55 23 * * *" Cron expression for the job to run For more information about configuring Elasticsearch with OpenShift Container Platform, see Configuring the Elasticsearch log store . 3.10.6. 3scale configuration The following table explains the parameters for the 3scale Istio Adapter in the ServiceMeshControlPlane resource. Example 3scale parameters apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true # ... Table 3.15. 3scale parameters Parameter Description Values Default value enabled Whether to use the 3scale adapter true / false false PARAM_THREESCALE_LISTEN_ADDR Sets the listen address for the gRPC server Valid port number 3333 PARAM_THREESCALE_LOG_LEVEL Sets the minimum log output level. 
debug , info , warn , error , or none info PARAM_THREESCALE_LOG_JSON Controls whether the log is formatted as JSON true / false true PARAM_THREESCALE_LOG_GRPC Controls whether the log contains gRPC info true / false true PARAM_THREESCALE_REPORT_METRICS Controls whether 3scale system and backend metrics are collected and reported to Prometheus true / false true PARAM_THREESCALE_METRICS_PORT Sets the port that the 3scale /metrics endpoint can be scrapped from Valid port number 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS Time period, in seconds, to wait before purging expired items from the cache Time period in seconds 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS Time period before expiry when cache elements are attempted to be refreshed Time period in seconds 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX Max number of items that can be stored in the cache at any time. Set to 0 to disable caching Valid number 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES The number of times unreachable hosts are retried during a cache update loop Valid number 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN Allow to skip certificate verification when calling 3scale APIs. Enabling this is not recommended. true / false false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS Sets the number of seconds to wait before terminating requests to 3scale System and Backend Time period in seconds 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS Sets the maximum amount of seconds (+/-10% jitter) a connection may exist before it is closed Time period in seconds 60 PARAM_USE_CACHE_BACKEND If true, attempt to create an in-memory apisonator cache for authorization requests true / false false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS If the backend cache is enabled, this sets the interval in seconds for flushing the cache against 3scale Time period in seconds 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED Whenever the backend cache cannot retrieve authorization data, whether to deny (closed) or allow (open) requests true / false true 3.11. Using the 3scale Istio adapter Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . The 3scale Istio Adapter is an optional adapter that allows you to label a service running within the Red Hat OpenShift Service Mesh and integrate that service with the 3scale API Management solution. It is not required for Red Hat OpenShift Service Mesh. 3.11.1. Integrate the 3scale adapter with Red Hat OpenShift Service Mesh You can use these examples to configure requests to your services using the 3scale Istio Adapter. Prerequisites Red Hat OpenShift Service Mesh version 1.x A working 3scale account ( SaaS or 3scale 2.5 On-Premises ) Enabling backend cache requires 3scale 2.9 or greater Red Hat OpenShift Service Mesh prerequisites Note To configure the 3scale Istio Adapter, refer to Red Hat OpenShift Service Mesh custom resources for instructions on adding adapter parameters to the custom resource file. Note Pay particular attention to the kind: handler resource. You must update this with your 3scale account credentials. 
You can optionally add a service_id to a handler, but this is kept for backwards compatibility only, since it would render the handler only useful for one service in your 3scale account. If you add service_id to a handler, enabling 3scale for other services requires you to create more handlers with different service_ids . Use a single handler per 3scale account by following the steps below: Procedure Create a handler for your 3scale account and specify your account credentials. Omit any service identifier. apiVersion: "config.istio.io/v1alpha2" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: "https://<organization>-admin.3scale.net/" access_token: "<ACCESS_TOKEN>" connection: address: "threescale-istio-adapter:3333" Optionally, you can provide a backend_url field within the params section to override the URL provided by the 3scale configuration. This may be useful if the adapter runs on the same cluster as the 3scale on-premise instance, and you wish to leverage the internal cluster DNS. Edit or patch the Deployment resource of any services belonging to your 3scale account as follows: Add the "service-mesh.3scale.net/service-id" label with a value corresponding to a valid service_id . Add the "service-mesh.3scale.net/credentials" label with its value being the name of the handler resource from step 1. Do step 2 to link it to your 3scale account credentials and to its service identifier, whenever you intend to add more services. Modify the rule configuration with your 3scale configuration to dispatch the rule to the threescale handler. Rule configuration example apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: threescale spec: match: destination.labels["service-mesh.3scale.net"] == "true" actions: - handler: threescale.handler instances: - threescale-authorization.instance 3.11.1.1. Generating 3scale custom resources The adapter includes a tool that allows you to generate the handler , instance , and rule custom resources. Table 3.16. Usage Option Description Required Default value -h, --help Produces help output for available options No --name Unique name for this URL, token pair Yes -n, --namespace Namespace to generate templates No istio-system -t, --token 3scale access token Yes -u, --url 3scale Admin Portal URL Yes --backend-url 3scale backend URL. If set, it overrides the value that is read from system configuration No -s, --service 3scale API/Service ID No --auth 3scale authentication pattern to specify (1=API Key, 2=App Id/App Key, 3=OIDC) No Hybrid -o, --output File to save produced manifests to No Standard output --version Outputs the CLI version and exits immediately No 3.11.1.1.1. Generate templates from URL examples Note Run the following commands via oc exec from the 3scale adapter container image in Generating manifests from a deployed adapter . Use the 3scale-config-gen command to help avoid YAML syntax and indentation errors. You can omit the --service if you use the annotations. This command must be invoked from within the container image via oc exec . Procedure Use the 3scale-config-gen command to autogenerate templates files allowing the token, URL pair to be shared by multiple services as a single handler: The following example generates the templates with the service ID embedded in the handler: Additional resources Tokens . 3.11.1.2. Generating manifests from a deployed adapter Note NAME is an identifier you use to identify with the service you are managing with 3scale. 
The CREDENTIALS_NAME reference is an identifier that corresponds to the match section in the rule configuration. This is automatically set to the NAME identifier if you are using the CLI tool. Its value does not need to be anything specific: the label value should just match the contents of the rule. See Routing service traffic through the adapter for more information. Run this command to generate manifests from a deployed adapter in the istio-system namespace: This will produce sample output to the terminal. Edit these samples if required and create the objects using the oc create command. When the request reaches the adapter, the adapter needs to know how the service maps to an API on 3scale. You can provide this information in two ways: Label the workload (recommended) Hard code the handler as service_id Update the workload with the required annotations: Note You only need to update the service ID provided in this example if it is not already embedded in the handler. The setting in the handler takes precedence . 3.11.1.3. Routing service traffic through the adapter Follow these steps to drive traffic for your service through the 3scale adapter. Prerequisites Credentials and service ID from your 3scale administrator. Procedure Match the rule destination.labels["service-mesh.3scale.net/credentials"] == "threescale" that you previously created in the configuration, in the kind: rule resource. Add the above label to PodTemplateSpec on the Deployment of the target workload to integrate a service. the value, threescale , refers to the name of the generated handler. This handler stores the access token required to call 3scale. Add the destination.labels["service-mesh.3scale.net/service-id"] == "replace-me" label to the workload to pass the service ID to the adapter via the instance at request time. 3.11.2. Configure the integration settings in 3scale Follow this procedure to configure the 3scale integration settings. Note For 3scale SaaS customers, Red Hat OpenShift Service Mesh is enabled as part of the Early Access program. Procedure Navigate to [your_API_name] Integration Click Settings . Select the Istio option under Deployment . The API Key (user_key) option under Authentication is selected by default. Click Update Product to save your selection. Click Configuration . Click Update Configuration . 3.11.3. Caching behavior Responses from 3scale System APIs are cached by default within the adapter. Entries will be purged from the cache when they become older than the cacheTTLSeconds value. Also by default, automatic refreshing of cached entries will be attempted seconds before they expire, based on the cacheRefreshSeconds value. You can disable automatic refreshing by setting this value higher than the cacheTTLSeconds value. Caching can be disabled entirely by setting cacheEntriesMax to a non-positive value. By using the refreshing process, cached values whose hosts become unreachable will be retried before eventually being purged when past their expiry. 3.11.4. Authenticating requests This release supports the following authentication methods: Standard API Keys : single randomized strings or hashes acting as an identifier and a secret token. Application identifier and key pairs : immutable identifier and mutable secret key strings. OpenID authentication method : client ID string parsed from the JSON Web Token. 3.11.4.1. Applying authentication patterns Modify the instance custom resource, as illustrated in the following authentication method examples, to configure authentication behavior. 
You can accept the authentication credentials from: Request headers Request parameters Both request headers and query parameters Note When specifying values from headers, they must be lower case. For example, if you want to send a header as User-Key , this must be referenced in the configuration as request.headers["user-key"] . 3.11.4.1.1. API key authentication method Service Mesh looks for the API key in query parameters and request headers as specified in the user option in the subject custom resource parameter. It checks the values in the order given in the custom resource file. You can restrict the search for the API key to either query parameters or request headers by omitting the unwanted option. In this example, Service Mesh looks for the API key in the user_key query parameter. If the API key is not in the query parameter, Service Mesh then checks the user-key header. API key authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the API key in a query parameter named "key", change request.query_params["user_key"] to request.query_params["key"] . 3.11.4.1.2. Application ID and application key pair authentication method Service Mesh looks for the application ID and application key in query parameters and request headers, as specified in the properties option in the subject custom resource parameter. The application key is optional. It checks the values in the order given in the custom resource file. You can restrict the search for the credentials to either query parameters or request headers by not including the unwanted option. In this example, Service Mesh looks for the application ID and application key in the query parameters first, moving on to the request headers if needed. Application ID and application key pair authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" action: path: request.url_path method: request.method | "get" If you want the adapter to examine a different query parameter or request header, change the name as appropriate. For example, to check for the application ID in a query parameter named identification , change request.query_params["app_id"] to request.query_params["identification"] . 3.11.4.1.3. OpenID authentication method To use the OpenID Connect (OIDC) authentication method , use the properties value on the subject field to set client_id , and optionally app_key . You can manipulate this object using the methods described previously. In the example configuration shown below, the client identifier (application ID) is parsed from the JSON Web Token (JWT) under the label azp . You can modify this as needed. 
OpenID authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" For this integration to work correctly, OIDC must still be done in 3scale for the client to be created in the identity provider (IdP). You should create a Request authorization for the service you want to protect in the same namespace as that service. The JWT is passed in the Authorization header of the request. In the sample RequestAuthentication defined below, replace issuer , jwksUri , and selector as appropriate. OpenID Policy example apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs 3.11.4.1.4. Hybrid authentication method You can choose to not enforce a particular authentication method and accept any valid credentials for either method. If both an API key and an application ID/application key pair are provided, Service Mesh uses the API key. In this example, Service Mesh checks for an API key in the query parameters, then the request headers. If there is no API key, it then checks for an application ID and key in the query parameters, then the request headers. Hybrid authentication method example apiVersion: "config.istio.io/v1alpha2" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params["user_key"] | request.headers["user-key"] | properties: app_id: request.query_params["app_id"] | request.headers["app-id"] | "" app_key: request.query_params["app_key"] | request.headers["app-key"] | "" client_id: request.auth.claims["azp"] | "" action: path: request.url_path method: request.method | "get" service: destination.labels["service-mesh.3scale.net/service-id"] | "" 3.11.5. 3scale Adapter metrics The adapter, by default reports various Prometheus metrics that are exposed on port 8080 at the /metrics endpoint. These metrics provide insight into how the interactions between the adapter and 3scale are performing. The service is labeled to be automatically discovered and scraped by Prometheus. 3.11.6. 3scale Istio adapter verification You might want to check whether the 3scale Istio adapter is working as expected. If your adapter is not working, use the following steps to help troubleshoot the problem. Procedure Ensure the 3scale-adapter pod is running in the Service Mesh control plane namespace: USD oc get pods -n istio-system Check that the 3scale-adapter pod has printed out information about itself booting up, such as its version: USD oc logs istio-system When performing requests to the services protected by the 3scale adapter integration, always try requests that lack the right credentials and ensure they fail. Check the 3scale adapter logs to gather additional information. Additional resources Inspecting pod and container logs . 3.11.7. 
3scale Istio adapter troubleshooting checklist As the administrator installing the 3scale Istio adapter, there are a number of scenarios that might be causing your integration to not function properly. Use the following list to troubleshoot your installation: Incorrect YAML indentation. Missing YAML sections. Forgot to apply the changes in the YAML to the cluster. Forgot to label the service workloads with the service-mesh.3scale.net/credentials key. Forgot to label the service workloads with service-mesh.3scale.net/service-id when using handlers that do not contain a service_id so they are reusable per account. The Rule custom resource points to the wrong handler or instance custom resources, or the references lack the corresponding namespace suffix. The Rule custom resource match section cannot possibly match the service you are configuring, or it points to a destination workload that is not currently running or does not exist. Wrong access token or URL for the 3scale Admin Portal in the handler. The Instance custom resource's params/subject/properties section fails to list the right parameters for app_id , app_key , or client_id , either because they specify the wrong location such as the query parameters, headers, and authorization claims, or the parameter names do not match the requests used for testing. Failing to use the configuration generator without realizing that it actually lives in the adapter container image and needs oc exec to invoke it. 3.12. Removing Service Mesh Warning You are viewing documentation for a Red Hat OpenShift Service Mesh release that is no longer supported. Service Mesh version 1.0 and 1.1 control planes are no longer supported. For information about upgrading your service mesh control plane, see Upgrading Service Mesh . For information about the support status of a particular Red Hat OpenShift Service Mesh release, see the Product lifecycle page . To remove Red Hat OpenShift Service Mesh from an existing OpenShift Container Platform instance, remove the control plane before removing the operators. 3.12.1. Removing the Red Hat OpenShift Service Mesh control plane To uninstall Service Mesh from an existing OpenShift Container Platform instance, first you delete the Service Mesh control plane and the Operators. Then, you run commands to remove residual resources. 3.12.1.1. Removing the Service Mesh control plane using the web console You can remove the Red Hat OpenShift Service Mesh control plane by using the web console. Procedure Log in to the OpenShift Container Platform web console. Click the Project menu and select the project where you installed the Service Mesh control plane, for example istio-system . Navigate to Operators Installed Operators . Click Service Mesh Control Plane under Provided APIs . Click the ServiceMeshControlPlane menu . Click Delete Service Mesh Control Plane . Click Delete on the confirmation dialog window to remove the ServiceMeshControlPlane . 3.12.1.2. Removing the Service Mesh control plane using the CLI You can remove the Red Hat OpenShift Service Mesh control plane by using the CLI. In this example, istio-system is the name of the control plane project. Procedure Log in to the OpenShift Container Platform CLI. Run the following command to delete the ServiceMeshMemberRoll resource. 
USD oc delete smmr -n istio-system default Run this command to retrieve the name of the installed ServiceMeshControlPlane : USD oc get smcp -n istio-system Replace <name_of_custom_resource> with the output from the command, and run this command to remove the custom resource: USD oc delete smcp -n istio-system <name_of_custom_resource> 3.12.2. Removing the installed Operators You must remove the Operators to successfully remove Red Hat OpenShift Service Mesh. After you remove the Red Hat OpenShift Service Mesh Operator, you must remove the Kiali Operator, the Red Hat OpenShift distributed tracing platform (Jaeger) Operator, and the OpenShift Elasticsearch Operator. 3.12.2.1. Removing the Operators Follow this procedure to remove the Operators that make up Red Hat OpenShift Service Mesh. Repeat the steps for each of the following Operators. Red Hat OpenShift Service Mesh Kiali Red Hat OpenShift distributed tracing platform (Jaeger) OpenShift Elasticsearch Procedure Log in to the OpenShift Container Platform web console. From the Operators Installed Operators page, scroll or type a keyword into the Filter by name to find each Operator. Then, click the Operator name. On the Operator Details page, select Uninstall Operator from the Actions menu. Follow the prompts to uninstall each Operator. 3.12.2.2. Clean up Operator resources Follow this procedure to manually remove resources left behind after removing the Red Hat OpenShift Service Mesh Operator using the OpenShift Container Platform web console. Prerequisites An account with cluster administration access. Access to the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform CLI as a cluster administrator. Run the following commands to clean up resources after uninstalling the Operators. If you intend to keep using Jaeger as a stand alone service without service mesh, do not delete the Jaeger resources. Note The Operators are installed in the openshift-operators namespace by default. If you installed the Operators in another namespace, replace openshift-operators with the name of the project where the Red Hat OpenShift Service Mesh Operator was installed. USD oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io USD oc delete -n openshift-operators daemonset/istio-node USD oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni USD oc delete clusterrole istio-view istio-edit USD oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view USD oc get crds -o name | grep '.*\.istio\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.maistra\.io' | xargs -r -n 1 oc delete USD oc get crds -o name | grep '.*\.kiali\.io' | xargs -r -n 1 oc delete USD oc delete crds jaegers.jaegertracing.io USD oc delete svc admission-controller -n <operator-project> USD oc delete project <istio-system-project>
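As an optional sanity check that is not part of the documented removal procedure, you can confirm that the cleanup commands above removed everything. The following sketch assumes the default project names used throughout this section; an empty result from the grep command indicates that no Service Mesh related CRDs remain.
# Confirm that no Service Mesh, Kiali, or distributed tracing CRDs remain.
oc get crds -o name | grep -E 'istio\.io|maistra\.io|kiali\.io|jaegertracing\.io' || echo "no Service Mesh CRDs remain"
# Confirm that the control plane project is gone. Replace <istio-system-project> with the name of your control plane project, for example istio-system.
oc get project <istio-system-project>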
|
[
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6",
"oc adm must-gather --image=registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:2.6 gather <namespace>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: foo spec: action: DENY rules: - from: - source: namespaces: [\"dev\"] to: - operation: hosts: [\"httpbin.com\",\"httpbin.com:*\"]",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: httpbin namespace: default spec: action: DENY rules: - to: - operation: hosts: [\"httpbin.example.com:*\"]",
"spec: global: pathNormalization: <option>",
"{ \"runtime\": { \"symlink_root\": \"/var/lib/istio/envoy/runtime\" } }",
"oc create secret generic -n <SMCPnamespace> gateway-bootstrap --from-file=bootstrap-override.json",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap",
"oc create secret generic -n <SMCPnamespace> gateway-settings --from-literal=overload.global_downstream_max_connections=10000",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: template: default #Change the version to \"v1.0\" if you are on the 1.0 stream. version: v1.1 istio: gateways: istio-ingressgateway: env: ISTIO_BOOTSTRAP_OVERRIDE: /var/lib/istio/envoy/custom-bootstrap/bootstrap-override.json secretVolumes: - mountPath: /var/lib/istio/envoy/custom-bootstrap name: custom-bootstrap secretName: gateway-bootstrap # below is the new secret mount - mountPath: /var/lib/istio/envoy/runtime name: gateway-settings secretName: gateway-settings",
"oc get jaeger -n istio-system",
"NAME AGE jaeger 3d21h",
"oc get jaeger jaeger -oyaml -n istio-system > /tmp/jaeger-cr.yaml",
"oc delete jaeger jaeger -n istio-system",
"oc create -f /tmp/jaeger-cr.yaml -n istio-system",
"rm /tmp/jaeger-cr.yaml",
"oc delete -f <jaeger-cr-file>",
"oc delete -f jaeger-prod-elasticsearch.yaml",
"oc create -f <jaeger-cr-file>",
"oc get pods -n jaeger-system -w",
"spec: version: v1.1",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.headers[<header>]: \"value\"",
"apiVersion: \"rbac.istio.io/v1alpha1\" kind: ServiceRoleBinding metadata: name: httpbin-client-binding namespace: httpbin spec: subjects: - user: \"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account\" properties: request.regex.headers[<header>]: \"<regular expression>\"",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project istio-system",
"oc create -n istio-system -f istio-installation.yaml",
"oc get smcp -n istio-system",
"NAME READY STATUS PROFILES VERSION AGE basic-install 11/11 ComponentsReady [\"default\"] v1.1.18 4m25s",
"oc get pods -n istio-system -w",
"NAME READY STATUS RESTARTS AGE grafana-7bf5764d9d-2b2f6 2/2 Running 0 28h istio-citadel-576b9c5bbd-z84z4 1/1 Running 0 28h istio-egressgateway-5476bc4656-r4zdv 1/1 Running 0 28h istio-galley-7d57b47bb7-lqdxv 1/1 Running 0 28h istio-ingressgateway-dbb8f7f46-ct6n5 1/1 Running 0 28h istio-pilot-546bf69578-ccg5x 2/2 Running 0 28h istio-policy-77fd498655-7pvjw 2/2 Running 0 28h istio-sidecar-injector-df45bd899-ctxdt 1/1 Running 0 28h istio-telemetry-66f697d6d5-cj28l 2/2 Running 0 28h jaeger-896945cbc-7lqrr 2/2 Running 0 11h kiali-78d9c5b87c-snjzh 1/1 Running 0 22h prometheus-6dff867c97-gr2n5 2/2 Running 0 28h",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"oc new-project <your-project>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system default",
"oc edit smmr -n <controlplane-namespace>",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system #control plane project spec: members: # a list of projects joined into the service mesh - your-project-name - another-project-name",
"oc patch deployment/<deployment> -p '{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\": \"'`date -Iseconds`'\"}}}}}'",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true",
"apiVersion: \"authentication.istio.io/v1alpha1\" kind: \"Policy\" metadata: name: default namespace: <NAMESPACE> spec: peers: - mtls: {}",
"apiVersion: \"networking.istio.io/v1alpha3\" kind: \"DestinationRule\" metadata: name: \"default\" namespace: <CONTROL_PLANE_NAMESPACE>> spec: host: \"*.local\" trafficPolicy: tls: mode: ISTIO_MUTUAL",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: tls: minProtocolVersion: TLSv1_2 maxProtocolVersion: TLSv1_3",
"oc create secret generic cacerts -n istio-system --from-file=<path>/ca-cert.pem --from-file=<path>/ca-key.pem --from-file=<path>/root-cert.pem --from-file=<path>/cert-chain.pem",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: false",
"oc delete secret istio.default",
"RATINGSPOD=`oc get pods -l app=ratings -o jsonpath='{.items[0].metadata.name}'`",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/root-cert.pem > /tmp/pod-root-cert.pem",
"oc exec -it USDRATINGSPOD -c istio-proxy -- /bin/cat /etc/certs/cert-chain.pem > /tmp/pod-cert-chain.pem",
"openssl x509 -in <path>/root-cert.pem -text -noout > /tmp/root-cert.crt.txt",
"openssl x509 -in /tmp/pod-root-cert.pem -text -noout > /tmp/pod-root-cert.crt.txt",
"diff /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt",
"sed '0,/^-----END CERTIFICATE-----/d' /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-ca.pem",
"openssl x509 -in <path>/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt",
"openssl x509 -in /tmp/pod-cert-chain-ca.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt",
"diff /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt",
"head -n 21 /tmp/pod-cert-chain.pem > /tmp/pod-cert-chain-workload.pem",
"openssl verify -CAfile <(cat <path>/ca-cert.pem <path>/root-cert.pem) /tmp/pod-cert-chain-workload.pem",
"/tmp/pod-cert-chain-workload.pem: OK",
"oc delete secret cacerts -n istio-system",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: global: mtls: enabled: true security: selfSigned: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: ext-host-gwy spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 443 name: https protocol: HTTPS hosts: - ext-host.example.com tls: mode: SIMPLE serverCertificate: /tmp/tls.crt privateKey: /tmp/tls.key",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: virtual-svc spec: hosts: - ext-host.example.com gateways: - ext-host-gwy",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - \"*\"",
"oc apply -f gateway.yaml",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - \"*\" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080",
"oc apply -f vs.yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"export TARGET_PORT=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.port.targetPort}')",
"curl -s -I \"USDGATEWAY_URL/productpage\"",
"oc get svc istio-ingressgateway -n istio-system",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].port}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].port}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].port}')",
"export INGRESS_HOST=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')",
"export INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"http2\")].nodePort}')",
"export SECURE_INGRESS_PORT=USD(oc -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"https\")].nodePort}')",
"export TCP_INGRESS_PORT=USD(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name==\"tcp\")].nodePort}')",
"spec: istio: gateways: istio-egressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 istio-ingressgateway: autoscaleEnabled: false autoscaleMin: 1 autoscaleMax: 5 ior_enabled: true",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway1 spec: selector: istio: ingressgateway servers: - port: number: 80 name: http protocol: HTTP hosts: - www.bookinfo.com - bookinfo.example.com",
"oc -n <control_plane_namespace> get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD gateway1-lvlfn bookinfo.example.com istio-ingressgateway <all> None gateway1-scqhv www.bookinfo.com istio-ingressgateway <all> None",
"apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: svc-entry spec: hosts: - ext-svc.example.com ports: - number: 443 name: https protocol: HTTPS location: MESH_EXTERNAL resolution: DNS",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ext-res-dr spec: host: ext-svc.example.com trafficPolicy: tls: mode: MUTUAL clientCertificate: /etc/certs/myclientcert.pem privateKey: /etc/certs/client_private_key.pem caCertificates: /etc/certs/rootcacerts.pem",
"apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: reviews spec: hosts: - reviews http: - match: - headers: end-user: exact: jason route: - destination: host: reviews subset: v2 - route: - destination: host: reviews subset: v3",
"oc apply -f <VirtualService.yaml>",
"spec: hosts:",
"spec: http: - match:",
"spec: http: - match: - destination:",
"apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: my-destination-rule spec: host: my-svc trafficPolicy: loadBalancer: simple: RANDOM subsets: - name: v1 labels: version: v1 - name: v2 labels: version: v2 trafficPolicy: loadBalancer: simple: ROUND_ROBIN - name: v3 labels: version: v3",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-all-v1.yaml",
"oc get virtualservices -o yaml",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc apply -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml",
"oc get virtualservice reviews -o yaml",
"oc create configmap --from-file=<templates-directory> smcp-templates -n openshift-operators",
"oc get clusterserviceversion -n openshift-operators | grep 'Service Mesh'",
"maistra.v1.0.0 Red Hat OpenShift Service Mesh 1.0.0 Succeeded",
"oc edit clusterserviceversion -n openshift-operators maistra.v1.0.0",
"deployments: - name: istio-operator spec: template: spec: containers: volumeMounts: - name: discovery-cache mountPath: /home/istio-operator/.kube/cache/discovery - name: smcp-templates mountPath: /usr/local/share/istio-operator/templates/ volumes: - name: discovery-cache emptyDir: medium: Memory - name: smcp-templates configMap: name: smcp-templates",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: minimal-install spec: template: default",
"oc get deployment -n <namespace>",
"get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: ratings-v1 namespace: bookinfo labels: app: ratings version: v1 spec: template: metadata: labels: sidecar.istio.io/inject: 'true'",
"oc apply -n <namespace> -f deployment.yaml",
"oc apply -n bookinfo -f deployment-ratings-v1.yaml",
"oc get deployment -n <namespace> <deploymentName> -o yaml",
"oc get deployment -n bookinfo ratings-v1 -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: resource spec: replicas: 7 selector: matchLabels: app: resource template: metadata: annotations: sidecar.maistra.io/proxyEnv: \"{ \\\"maistra_test_env\\\": \\\"env_value\\\", \\\"maistra_test_env_2\\\": \\\"env_value_2\\\" }\"",
"oc get cm -n istio-system istio -o jsonpath='{.data.mesh}' | grep disablePolicyChecks",
"oc edit cm -n istio-system istio",
"oc new-project bookinfo",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default spec: members: - bookinfo",
"oc create -n istio-system -f servicemeshmemberroll-default.yaml",
"oc get smmr -n istio-system -o wide",
"NAME READY STATUS AGE MEMBERS default 1/1 Configured 70s [\"bookinfo\"]",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml",
"service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/bookinfo-gateway.yaml",
"gateway.networking.istio.io/bookinfo-gateway created virtualservice.networking.istio.io/bookinfo created",
"export GATEWAY_URL=USD(oc -n istio-system get route istio-ingressgateway -o jsonpath='{.spec.host}')",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all.yaml",
"oc apply -n bookinfo -f https://raw.githubusercontent.com/Maistra/istio/maistra-2.6/samples/bookinfo/networking/destination-rule-all-mtls.yaml",
"destinationrule.networking.istio.io/productpage created destinationrule.networking.istio.io/reviews created destinationrule.networking.istio.io/ratings created destinationrule.networking.istio.io/details created",
"oc get pods -n bookinfo",
"NAME READY STATUS RESTARTS AGE details-v1-55b869668-jh7hb 2/2 Running 0 12m productpage-v1-6fc77ff794-nsl8r 2/2 Running 0 12m ratings-v1-7d7d8d8b56-55scn 2/2 Running 0 12m reviews-v1-868597db96-bdxgq 2/2 Running 0 12m reviews-v2-5b64f47978-cvssp 2/2 Running 0 12m reviews-v3-6dfd49b55b-vcwpf 2/2 Running 0 12m",
"echo \"http://USDGATEWAY_URL/productpage\"",
"oc delete project bookinfo",
"oc -n istio-system patch --type='json' smmr default -p '[{\"op\": \"remove\", \"path\": \"/spec/members\", \"value\":[\"'\"bookinfo\"'\"]}]'",
"curl \"http://USDGATEWAY_URL/productpage\"",
"export JAEGER_URL=USD(oc get route -n istio-system jaeger -o jsonpath='{.spec.host}')",
"echo USDJAEGER_URL",
"curl \"http://USDGATEWAY_URL/productpage\"",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: basic-install spec: istio: global: proxy: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 128Mi gateways: istio-egressgateway: autoscaleEnabled: false istio-ingressgateway: autoscaleEnabled: false ior_enabled: false mixer: policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 100m memory: 1G limits: cpu: 500m memory: 4G pilot: autoscaleEnabled: false traceSampling: 100 kiali: enabled: true grafana: enabled: true tracing: enabled: true jaeger: template: all-in-one",
"istio: global: tag: 1.1.0 hub: registry.redhat.io/openshift-service-mesh/ proxy: resources: requests: cpu: 10m memory: 128Mi limits: mtls: enabled: false disablePolicyChecks: true policyCheckFailOpen: false imagePullSecrets: - MyPullSecret",
"gateways: egress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1 enabled: true ingress: enabled: true runtime: deployment: autoScaling: enabled: true maxReplicas: 5 minReplicas: 1",
"mixer: enabled: true policy: autoscaleEnabled: false telemetry: autoscaleEnabled: false resources: requests: cpu: 10m memory: 128Mi limits:",
"spec: runtime: components: pilot: deployment: autoScaling: enabled: true minReplicas: 1 maxReplicas: 5 targetCPUUtilizationPercentage: 85 pod: tolerations: - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 60 affinity: podAntiAffinity: requiredDuringScheduling: - key: istio topologyKey: kubernetes.io/hostname operator: In values: - pilot container: resources: limits: cpu: 100m memory: 128M",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: kiali: enabled: true dashboard: viewOnlyMode: false ingress: enabled: true",
"enabled",
"dashboard viewOnlyMode",
"ingress enabled",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false grafanaURL: \"https://grafana-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"spec: kiali: enabled: true dashboard: viewOnlyMode: false jaegerURL: \"http://jaeger-query-istio-system.127.0.0.1.nip.io\" ingress: enabled: true",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: version: v1.1 istio: tracing: enabled: true jaeger: template: all-in-one",
"tracing: enabled:",
"jaeger: template:",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"oc get route -n istio-system external-jaeger",
"NAME HOST/PORT PATH SERVICES [...] external-jaeger external-jaeger-istio-system.apps.test external-jaeger-query [...]",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"external-jaeger\" # Deploy to the Control Plane Namespace namespace: istio-system spec: # Set Up Authentication ingress: enabled: true security: oauth-proxy openshift: # This limits user access to the Jaeger instance to users who have access # to the control plane namespace. Make sure to set the correct namespace here sar: '{\"namespace\": \"istio-system\", \"resource\": \"pods\", \"verb\": \"get\"}' htpasswdFile: /etc/proxy/htpasswd/auth volumeMounts: - name: secret-htpasswd mountPath: /etc/proxy/htpasswd volumes: - name: secret-htpasswd secret: secretName: htpasswd",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane metadata: name: external-jaeger namespace: istio-system spec: version: v1.1 istio: tracing: # Disable Jaeger deployment by service mesh operator enabled: false global: tracer: zipkin: # Set Endpoint for Trace Collection address: external-jaeger-collector.istio-system.svc.cluster.local:9411 kiali: # Set Jaeger dashboard URL dashboard: jaegerURL: https://external-jaeger-istio-system.apps.test # Set Endpoint for Trace Querying jaegerInClusterURL: external-jaeger-query.istio-system.svc.cluster.local",
"apiVersion: maistra.io/v1 kind: ServiceMeshControlPlane spec: istio: tracing: enabled: true ingress: enabled: true jaeger: template: production-elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: resources: requests: cpu: \"1\" memory: \"16Gi\" limits: cpu: \"1\" memory: \"16Gi\"",
"tracing: enabled:",
"ingress: enabled:",
"jaeger: template:",
"elasticsearch: nodeCount:",
"requests: cpu:",
"requests: memory:",
"limits: cpu:",
"limits: memory:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger spec: strategy: production storage: type: elasticsearch esIndexCleaner: enabled: false numberOfDays: 7 schedule: \"55 23 * * *\"",
"apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic spec: addons: 3Scale: enabled: false PARAM_THREESCALE_LISTEN_ADDR: 3333 PARAM_THREESCALE_LOG_LEVEL: info PARAM_THREESCALE_LOG_JSON: true PARAM_THREESCALE_LOG_GRPC: false PARAM_THREESCALE_REPORT_METRICS: true PARAM_THREESCALE_METRICS_PORT: 8080 PARAM_THREESCALE_CACHE_TTL_SECONDS: 300 PARAM_THREESCALE_CACHE_REFRESH_SECONDS: 180 PARAM_THREESCALE_CACHE_ENTRIES_MAX: 1000 PARAM_THREESCALE_CACHE_REFRESH_RETRIES: 1 PARAM_THREESCALE_ALLOW_INSECURE_CONN: false PARAM_THREESCALE_CLIENT_TIMEOUT_SECONDS: 10 PARAM_THREESCALE_GRPC_CONN_MAX_SECONDS: 60 PARAM_USE_CACHED_BACKEND: false PARAM_BACKEND_CACHE_FLUSH_INTERVAL_SECONDS: 15 PARAM_BACKEND_CACHE_POLICY_FAIL_CLOSED: true",
"apiVersion: \"config.istio.io/v1alpha2\" kind: handler metadata: name: threescale spec: adapter: threescale params: system_url: \"https://<organization>-admin.3scale.net/\" access_token: \"<ACCESS_TOKEN>\" connection: address: \"threescale-istio-adapter:3333\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: rule metadata: name: threescale spec: match: destination.labels[\"service-mesh.3scale.net\"] == \"true\" actions: - handler: threescale.handler instances: - threescale-authorization.instance",
"3scale-config-gen --name=admin-credentials --url=\"https://<organization>-admin.3scale.net:443\" --token=\"[redacted]\"",
"3scale-config-gen --url=\"https://<organization>-admin.3scale.net\" --name=\"my-unique-id\" --service=\"123456789\" --token=\"[redacted]\"",
"export NS=\"istio-system\" URL=\"https://replaceme-admin.3scale.net:443\" NAME=\"name\" TOKEN=\"token\" exec -n USD{NS} USD(oc get po -n USD{NS} -o jsonpath='{.items[?(@.metadata.labels.app==\"3scale-istio-adapter\")].metadata.name}') -it -- ./3scale-config-gen --url USD{URL} --name USD{NAME} --token USD{TOKEN} -n USD{NS}",
"export CREDENTIALS_NAME=\"replace-me\" export SERVICE_ID=\"replace-me\" export DEPLOYMENT=\"replace-me\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" patch=\"USD(oc get deployment \"USD{DEPLOYMENT}\" --template='{\"spec\":{\"template\":{\"metadata\":{\"labels\":{ {{ range USDk,USDv := .spec.template.metadata.labels }}\"{{ USDk }}\":\"{{ USDv }}\",{{ end }}\"service-mesh.3scale.net/service-id\":\"'\"USD{SERVICE_ID}\"'\",\"service-mesh.3scale.net/credentials\":\"'\"USD{CREDENTIALS_NAME}\"'\"}}}}}' )\" patch deployment \"USD{DEPLOYMENT}\" --patch ''\"USD{patch}\"''",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization namespace: istio-system spec: template: authorization params: subject: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" action: path: request.url_path method: request.method | \"get\"",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: threescale-authorization params: subject: properties: app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: bookinfo spec: selector: matchLabels: app: productpage jwtRules: - issuer: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak jwksUri: >- http://keycloak-keycloak.34.242.107.254.nip.io/auth/realms/3scale-keycloak/protocol/openid-connect/certs",
"apiVersion: \"config.istio.io/v1alpha2\" kind: instance metadata: name: threescale-authorization spec: template: authorization params: subject: user: request.query_params[\"user_key\"] | request.headers[\"user-key\"] | properties: app_id: request.query_params[\"app_id\"] | request.headers[\"app-id\"] | \"\" app_key: request.query_params[\"app_key\"] | request.headers[\"app-key\"] | \"\" client_id: request.auth.claims[\"azp\"] | \"\" action: path: request.url_path method: request.method | \"get\" service: destination.labels[\"service-mesh.3scale.net/service-id\"] | \"\"",
"oc get pods -n istio-system",
"oc logs istio-system",
"oc delete smmr -n istio-system default",
"oc get smcp -n istio-system",
"oc delete smcp -n istio-system <name_of_custom_resource>",
"oc delete validatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete mutatingwebhookconfiguration/openshift-operators.servicemesh-resources.maistra.io",
"oc delete -n openshift-operators daemonset/istio-node",
"oc delete clusterrole/istio-admin clusterrole/istio-cni clusterrolebinding/istio-cni",
"oc delete clusterrole istio-view istio-edit",
"oc delete clusterrole jaegers.jaegertracing.io-v1-admin jaegers.jaegertracing.io-v1-crdview jaegers.jaegertracing.io-v1-edit jaegers.jaegertracing.io-v1-view",
"oc get crds -o name | grep '.*\\.istio\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.maistra\\.io' | xargs -r -n 1 oc delete",
"oc get crds -o name | grep '.*\\.kiali\\.io' | xargs -r -n 1 oc delete",
"oc delete crds jaegers.jaegertracing.io",
"oc delete svc admission-controller -n <operator-project>",
"oc delete project <istio-system-project>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/service_mesh/service-mesh-1-x
|
C.2.2. Non-typed Child Resource Start and Stop Ordering
|
C.2.2. Non-typed Child Resource Start and Stop Ordering Additional considerations are required for non-typed child resources. For a non-typed child resource, starting order and stopping order are not explicitly specified by the Service resource. Instead, starting order and stopping order are determined according to the order of the child resource in /etc/cluster/cluster.conf . Additionally, non-typed child resources are started after all typed child resources and stopped before any typed child resources. For example, consider the starting order and stopping order of the non-typed child resources in Example C.4, "Non-typed and Typed Child Resource in a Service" . Example C.4. Non-typed and Typed Child Resource in a Service Non-typed Child Resource Starting Order In Example C.4, "Non-typed and Typed Child Resource in a Service" , the child resources are started in the following order: lvm:1 - This is an LVM resource. All LVM resources are started first. lvm:1 ( <lvm name="1" .../> ) is the first LVM resource started among LVM resources because it is the first LVM resource listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are started first. lvm:2 ( <lvm name="2" .../> ) is started after lvm:1 because it is listed after lvm:1 in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . script:1 - This is a Script resource. If there were other Script resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . nontypedresource:foo - This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is before the other non-typed resource, nontypedresourcetwo:bar ; therefore, it is started before nontypedresourcetwo:bar . (Non-typed resources are started in the order that they appear in the Service resource.) nontypedresourcetwo:bar - This is a non-typed resource. Because it is a non-typed resource, it is started after the typed resources start. In addition, its order in the Service resource is after the other non-typed resource, nontypedresource:foo ; therefore, it is started after nontypedresource:foo . (Non-typed resources are started in the order that they appear in the Service resource.) Non-typed Child Resource Stopping Order In Example C.4, "Non-typed and Typed Child Resource in a Service" , the child resources are stopped in the following order: nontypedresourcetwo:bar - This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. In addition, its order in the Service resource is after the other non-typed resource, nontypedresource:foo ; therefore, it is stopped before nontypedresource:foo . (Non-typed resources are stopped in the reverse order that they appear in the Service resource.) nontypedresource:foo - This is a non-typed resource. Because it is a non-typed resource, it is stopped before the typed resources are stopped. 
In addition, its order in the Service resource is before the other non-typed resource, nontypedresourcetwo:bar ; therefore, it is stopped after nontypedresourcetwo:bar . (Non-typed resources are stopped in the reverse order that they appear in the Service resource.) script:1 - This is a Script resource. If there were other Script resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are stopped last. lvm:2 ( <lvm name="2" .../> ) is stopped before lvm:1 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:1 - This is an LVM resource. All LVM resources are stopped last. lvm:1 ( <lvm name="1" .../> ) is stopped after lvm:2 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf .
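If the rgmanager resource group utilities are installed, you can preview the ordering that rgmanager computes for a service before you commit a configuration change. The following sketch assumes the rg_test utility shipped with rgmanager and the example service foo; confirm the exact syntax available in your release before relying on it:
rg_test noop /etc/cluster/cluster.conf start service foo
rg_test noop /etc/cluster/cluster.conf stop service foo
The first command prints the start ordering and the second prints the stop ordering, without actually starting or stopping the service.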
|
[
"<service name=\"foo\"> <script name=\"1\" .../> <nontypedresource name=\"foo\"/> <lvm name=\"1\" .../> <nontypedresourcetwo name=\"bar\"/> <ip address=\"10.1.1.1\" .../> <fs name=\"1\" .../> <lvm name=\"2\" .../> </service>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-clust-rsc-non-typed-resources-CA
|
probe::signal.sys_tgkill.return
|
probe::signal.sys_tgkill.return Name probe::signal.sys_tgkill.return - Sending kill signal to a thread group completed Synopsis signal.sys_tgkill.return Values name Name of the probe point retstr The return value to either __group_send_sig_info,
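For example, a minimal SystemTap one-liner that attaches to this probe point and prints the values listed above might look like the following; this is illustrative usage only and must be run with sufficient privileges:
stap -e 'probe signal.sys_tgkill.return { printf("%s: retstr=%s\n", name, retstr) }'
While the script is running, any completed tgkill calls on the system are reported together with their return value string.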
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-sys-tgkill-return
|
Chapter 9. Using director to define performance tiers for varying workloads
|
Chapter 9. Using director to define performance tiers for varying workloads Red Hat OpenStack Platform (RHOSP) director deploys Red Hat Ceph Storage performance tiers. Ceph Storage CRUSH rules combine with the CephPools parameter to use the device classes feature. This builds different tiers to accommodate workloads with different performance requirements. For example, you can define an HDD class for normal workloads and an SSD class that distributes data only over SSDs for high performance loads. In this scenario, when you create a new Block Storage volume, you can choose the performance tier, either HDDs or SSDs. For more information on CRUSH rule creation, see Configuring CRUSH hierarchies . Warning Defining performance tiers in an existing environment can result in data movement in the Ceph Storage cluster. Director uses cephadm during the stack update. The cephadm application does not have the logic to verify if a pool exists and contains data. Changing the default CRUSH rule associated with a pool results in data movement. If the pool contains a large amount of data, that data will be moved. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support. Ceph Storage automatically detects the disk type and assigns it to the corresponding device class, either HDD, SSD, or NVMe, based on the hardware properties exposed by the Linux kernel. Prerequisites For new deployments, use Red Hat Ceph Storage (RHCS) version 5.2 or later. 9.1. Configuring performance tiers To deploy different Red Hat Ceph Storage performance tiers, create a new environment file that contains the CRUSH map details and include it in the deployment command. Director does not expose specific parameters for this feature, but you can generate the tripleo-ansible expected variables. Note Performance tier configuration can be combined with CRUSH hierarchies. See Configuring CRUSH hierarchies for information on CRUSH rule creation. In the example procedure, each Ceph Storage node contains three OSDs: sdb and sdc are spinning disks and sdd is an SSD. Ceph automatically detects the correct disk type. You then configure two CRUSH rules, HDD and SSD, to map to the two respective device classes. Note The HDD rule is the default and applies to all pools unless you configure pools with a different rule. Finally, you create an extra pool called fastpool and map it to the SSD rule. This pool is ultimately exposed through a Block Storage (cinder) back end. Any workload that consumes this Block Storage back end is backed by SSDs for fast performance. You can leverage this for either data or boot from volume. WARNING Defining performance tiers in an existing environment might result in massive data movement in the Ceph cluster. cephadm , which director triggers during the stack update, does not have logic to verify whether a pool is already defined in the Ceph cluster and if it contains data. This means that defining performance tiers in an existing environment can be dangerous because the change of the default CRUSH rule that is associated with a pool results in data movement. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support. Procedure Log in to the undercloud node as the stack user. Create an environment file, such as /home/stack/templates/ceph-config.yaml , to contain the Ceph config parameters and the device classes variables. Alternatively, you can add the following configurations to an existing environment file.
Add the CephCrushRules parameters. The CephCrushRules parameter must contain a rule for each class that you define or that Ceph detects automatically. When you create a new pool, if no rule is specified, the rule that you want Ceph to use as the default is selected. Add the CephPools parameter: Use the rule_name parameter to specify the tier for each pool that does not use the default rule. In the following example, the fastpool pool uses the SSD device class that is configured as a fast tier, to manage Block Storage volumes. Use the CinderRbdExtraPools parameter to configure fastpool as a Block Storage back end. Use the following example to ensure that your environment file contains the correct values: Include the new environment file in the openstack overcloud deploy command. Replace <other_overcloud_environment_files> with the list of other environment files that are part of your deployment. Important If you apply the environment file to an existing Ceph cluster, the pre-existing Ceph pools are not updated with the new rules. For this reason, you must enter the following command after the deployment completes to set the rules to the specified pools. Replace <pool> with the name of the pool that you want to apply the new rule to. Replace <rule> with one of the rule names that you specified with the crush_rules parameter. For every rule that you change with this command, update the existing entry or add a new entry in the CephPools parameter in your existing templates: 9.2. Verifying CRUSH rules and pools Verify your CRUSH rules and pools settings. WARNING Defining performance tiers in an existing environment might result in massive data movement in the Ceph cluster. tripleo-ansible , which director triggers during the stack update, does not have logic to check if a pool is already defined in the Ceph cluster and if it contains data. This means that defining performance tiers in an existing environment can be dangerous because the change of the default CRUSH rule that is associated with a pool results in data movement. If you require assistance or recommendations for adding or removing nodes, contact Red Hat support. Procedure Log in to the overcloud Controller node as the tripleo-admin user. To verify that your OSD tiers are successfully set, enter the following command. In the resulting tree view, verify that the CLASS column displays the correct device class for each OSD that you set. Also verify that the OSDs are correctly assigned to the device classes with the following command. Compare the resulting hierarchy with the results of the following command to ensure that the same values apply for each rule. Replace <rule_name> with the name of the rule you want to check. Verify that the rules name and ID that you created are correct according to the crush_rules parameter that you used during deployment. Verify that the Ceph pools are tied to the correct CRUSH rule ID that you retrieved in Step 3. For each pool, ensure that the rule ID matches the rule name that you expect.
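As an additional spot check, assuming the fastpool pool and SSD rule names used in this example, you can query a single pool directly to confirm which CRUSH rule it uses:
sudo cephadm shell -- ceph osd pool get fastpool crush_rule
The command should report the rule that you assigned, for example crush_rule: SSD.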
|
[
"CephCrushRules: - name: HDD root: default type: host class: hdd default: true - name: SSD root: default type: host class: ssd default: false",
"CephPools: - name: fastpool rule_name: SSD application: rbd CinderRbdExtraPools: fastpool",
"parameter_defaults: CephCrushRules: - name: replicated_hdd default: true class: hdd root: default type: host CinderRbdExtraPools: fastpool CephPools: - name: fastpool rule_name: SSD application: rbd",
"openstack overcloud deploy --templates ... -e <other_overcloud_environment_files> -e /home/stack/templates/ceph-config.yaml ...",
"ceph osd pool set <pool> crush_rule <rule>",
"CephPools: - name: <pool> rule_name: <rule> application: rbd",
"sudo cephadm shell ceph osd tree",
"sudo cephadm shell ceph osd crush tree --show-shadow",
"sudo cephadm shell ceph osd crush rule dump <rule_name>",
"sudo cephadm shell ceph osd crush rule dump | grep -E \"rule_(id|name)\"",
"sudo cephadm shell -- ceph osd dump | grep pool"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_using-director-to-define-performance-tiers-for-varying-workloads_deployingcontainerizedrhcs
|
Chapter 21. Access control in IdM
|
Chapter 21. Access control in IdM Access control defines the rights or permissions users have been granted to perform operations on other users or objects, such as hosts or services. Identity Management (IdM) provides several access control areas to make it clear what kind of access is being granted and to whom it is granted. As part of this, IdM draws a distinction between access control to resources within the domain and access control to the IdM configuration itself. This chapter outlines the different internal access control mechanisms that are available for IdM users both to the resources within the domain and to the IdM configuration itself. 21.1. Access control instructions in IdM The Identity Management (IdM) access control structure is based on the 389 Directory Server access control. By using access control instructions (ACIs), you can grant or deny specific IdM users access over other entries. All entries, including IdM users, are stored in LDAP. An ACI has three parts: Actor The entity that is being granted permission to do something. In LDAP access control models, you can, for example, specify that the ACI rule is applied only when a user binds to the directory using their distinguished name (DN). Such a specification is called the bind rule : it defines who the user is and can optionally require other limits on the bind attempt, such as restricting attempts to a certain time of day or a certain machine. Target The entry that the actor is allowed to perform operations on. Operation type Determines what kinds of actions the actor is allowed to perform. The most common operations are add, delete, write, read, and search. In IdM, the read and search rights of a non-administrative user are limited, and even more so in the IdM Web UI than the IdM CLI. When an LDAP operation is attempted, the following occurs: The IdM client sends user credentials to an IdM server as part of the bind operation. The IdM server DS checks the user credentials. The IdM server DS checks the user account to see if the user has a permission to perform the requested operation. 21.2. Access control methods in IdM Identity Management (IdM) divides access control methods into the following categories: Self-service rules Define what operations a user can perform on the user's own personal entry. This access control type only allows write permissions to specific attributes within the user entry. Users can update the values of specific attributes but cannot add or delete the attributes as such. Delegation rules By using a delegation rule, you can allow a specific user group to perform write, that is edit, operations on specific attributes of users in another user group. Similarly to self-service rules, this form of access control rule is limited to editing the values of specific attributes. It does not grant the ability to add or remove whole entries or control over unspecified attributes. Role-based access control Creates special access control groups that are then granted much broader authority over all types of entities in the IdM domain. Roles can be granted edit, add, and delete rights, meaning they can be granted complete control over entire entries, not just selected attributes. Certain roles are already available in IdM by default, for example Enrollment Administrator , IT Security Specialist , and IT Specialist . You can create additional roles to manage any types of entries, such as hosts, automount configuration, netgroups, DNS settings, and IdM configuration. 
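For example, the following sketch shows how a role-based access control grant is typically assembled from the IdM command line; the role name, privilege, and group used here are illustrative only:
ipa role-add 'Junior Host Admins' --desc='Can enroll and manage hosts'
ipa role-add-privilege 'Junior Host Admins' --privileges='Host Enrollment'
ipa role-add-member 'Junior Host Admins' --groups=junior-sysadmins
Every member of the junior-sysadmins group then receives the rights defined by the privileges attached to the role.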
Additional resources Using Ansible playbooks to manage self-service rules in IdM Delegating permissions to user groups to manage users using Ansible playbooks Using Ansible playbooks to manage role-based access control in IdM
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/access-control-in-idm_configuring-and-managing-idm
|
Chapter 8. Configuring your Logging deployment
|
Chapter 8. Configuring your Logging deployment 8.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 8.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 8.2. Configuring systemd-journald and Fluentd Because Fluentd reads from the journal, and the journal default settings are very low, journal entries can be lost because the journal cannot keep up with the logging rate from system services. We recommend setting RateLimitIntervalSec=30s and RateLimitBurst=10000 (or even higher if necessary) to prevent the journal from losing entries. 8.2.1. Configuring systemd-journald for OpenShift Logging As you scale up your project, the default logging environment might need some adjustments. For example, if you are missing logs, you might have to increase the rate limits for journald. You can adjust the number of messages to retain for a specified period of time to ensure that OpenShift Logging does not use excessive resources without dropping logs. You can also determine if you want the logs compressed, how long to retain logs, how or if the logs are stored, and other settings. Procedure Create a Butane config file, 40-worker-custom-journald.bu , that includes an /etc/systemd/journald.conf file with the required settings. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: "worker" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10 1 Set the permissions for the journald.conf file. It is recommended to set 0644 permissions. 2 Specify whether you want logs compressed before they are written to the file system. Specify yes to compress the message or no to not compress. The default is yes . 3 Configure whether to forward log messages. Defaults to no for each. Specify: ForwardToConsole to forward logs to the system console. ForwardToKMsg to forward logs to the kernel log buffer. 
ForwardToSyslog to forward to a syslog daemon. ForwardToWall to forward messages as wall messages to all logged-in users. 4 Specify the maximum time to store journal entries. Enter a number to specify seconds. Or include a unit: "year", "month", "week", "day", "h" or "m". Enter 0 to disable. The default is 1month . 5 Configure rate limiting. If more logs are received than what is specified in RateLimitBurst during the time interval defined by RateLimitIntervalSec , all further messages within the interval are dropped until the interval is over. It is recommended to set RateLimitIntervalSec=30s and RateLimitBurst=10000 , which are the defaults. 6 Specify how logs are stored. The default is persistent : volatile to store logs in memory in /run/log/journal/ . These logs are lost after rebooting. persistent to store logs to disk in /var/log/journal/ . systemd creates the directory if it does not exist. auto to store logs in /var/log/journal/ if the directory exists. If it does not exist, systemd temporarily stores logs in /run/systemd/journal . none to not store logs. systemd drops all logs. 7 Specify the timeout before synchronizing journal files to disk for ERR , WARNING , NOTICE , INFO , and DEBUG logs. systemd immediately syncs after receiving a CRIT , ALERT , or EMERG log. The default is 1s . 8 Specify the maximum size the journal can use. The default is 8G . 9 Specify how much disk space systemd must leave free. The default is 20% . 10 Specify the maximum size for individual journal files stored persistently in /var/log/journal . The default is 10M . Note If you are removing the rate limit, you might see increased CPU utilization on the system logging daemons as it processes any messages that would have previously been throttled. For more information on systemd settings, see https://www.freedesktop.org/software/systemd/man/journald.conf.html . The default settings listed on that page might not apply to OpenShift Container Platform. Use Butane to generate a MachineConfig object file, 40-worker-custom-journald.yaml , containing the configuration to be delivered to the nodes: USD butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml Apply the machine config. For example: USD oc apply -f 40-worker-custom-journald.yaml The controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Monitor the status of the rollout of the new rendered configuration to each node: USD oc describe machineconfigpool/worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool ... Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e
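After the machine config pool reports that the nodes are updated, you can optionally confirm that the rendered settings reached a node; the node name below is a placeholder:
USD oc get machineconfig | grep 40-worker-custom-journald
USD oc debug node/<node_name> -- chroot /host cat /etc/systemd/journald.conf
The output of the second command should show the values you set in the Butane config, such as RateLimitBurst=10000.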
|
[
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"variant: openshift version: 4.13.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/configuring-your-logging-deployment
|
8.244. xorg-x11-server
|
8.244. xorg-x11-server 8.244.1. RHSA-2013:1620 - Low: xorg-x11-server security and bug fix update Updated xorg-x11-server packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. X.Org is an open source implementation of the X Window System. It provides the basic low-level functionality that full-fledged graphical user interfaces are designed upon. Security Fix CVE-2013-1940 A flaw was found in the way the X.org X11 server registered new hot plugged devices. If a local user switched to a different session and plugged in a new device, input from that device could become available in the session, possibly leading to information disclosure. This issue was found by David Airlie and Peter Hutterer of Red Hat. Bug Fixes BZ# 915202 An upstream patch modified the Xephyr X server to be resizeable; however, it did not enable the resize functionality by default. As a consequence, X sandboxes were not resizeable on Red Hat Enterprise Linux 6.4 and later. This update enables the resize functionality by default so that X sandboxes can now be resized as expected. BZ# 957298 In Red Hat Enterprise Linux 6, the X Security extension (XC-SECURITY) has been disabled and replaced by X Access Control Extension (XACE). However, XACE does not yet include functionality that was previously available in XC-SECURITY. With this update, XC-SECURITY is enabled in the xorg-x11-server spec file on Red Hat Enterprise Linux 6. BZ# 969538 Upstream code changes to extension initialization accidentally disabled the GLX extension in Xvfb (the X virtual frame buffer), rendering headless 3D applications not functional. An upstream patch to this problem has been backported so the GLX extension is enabled again, and applications relying on this extension work as expected. All xorg-x11-server users are advised to upgrade to these updated packages, which contain backported patches to correct these issues.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-server
|
Chapter 5. Adding Network Interfaces
|
Chapter 5. Adding Network Interfaces Satellite supports specifying multiple network interfaces for a single host. You can configure these interfaces when creating a new host as described in Section 2.1, "Creating a Host in Red Hat Satellite" or when editing an existing host. There are several types of network interfaces that you can attach to a host. When adding a new interface, select one of: Interface : Allows you to specify an additional physical or virtual interface. There are two types of virtual interfaces you can create. Use VLAN when the host needs to communicate with several (virtual) networks using a single interface, while these networks are not accessible to each other. Use alias to add an additional IP address to an existing interface. For more information about adding a physical interface, see Section 5.1, "Adding a Physical Interface" . For more information about adding a virtual interface, see Section 5.2, "Adding a Virtual Interface" . Bond : Creates a bonded interface. NIC bonding is a way to bind multiple network interfaces together into a single interface that appears as a single device and has a single MAC address. This enables two or more network interfaces to act as one, increasing the bandwidth and providing redundancy. For more information, see Section 5.3, "Adding a Bonded Interface" . BMC : Baseboard Management Controller (BMC) allows you to remotely monitor and manage the physical state of machines. For more information about BMC, see Enabling Power Management on Managed Hosts in Installing Satellite Server in a Connected Network Environment . For more information about configuring BMC interfaces, see Section 5.5, "Adding a Baseboard Management Controller (BMC) Interface" . Note Additional interfaces have the Managed flag enabled by default, which means the new interface is configured automatically during provisioning by the DNS and DHCP Capsule Servers associated with the selected subnet. This requires a subnet with correctly configured DNS and DHCP Capsule Servers. If you use a Kickstart method for host provisioning, configuration files are automatically created for managed interfaces in the post-installation phase at /etc/sysconfig/network-scripts/ifcfg- interface_id . Note Virtual and bonded interfaces currently require a MAC address of a physical device. Therefore, the configuration of these interfaces works only on bare-metal hosts. 5.1. Adding a Physical Interface Use this procedure to add an additional physical interface to a host. Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit next to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify a MAC address . This setting is required. Specify the Device Identifier , for example eth0 . The identifier is used to specify this physical interface when creating bonded interfaces, VLANs, and aliases. Specify the DNS name associated with the host's IP address. Satellite saves this name in the Capsule Server associated with the selected domain (the "DNS A" field) and the Capsule Server associated with the selected subnet (the "DNS PTR" field). A single host can therefore have several DNS entries. Select a domain from the Domain list. To create and manage domains, navigate to Infrastructure > Domains . Select a subnet from the Subnet list. To create and manage subnets, navigate to Infrastructure > Subnets . Specify the IP address .
Managed interfaces with an assigned DHCP Capsule Server require this setting for creating a DHCP lease. DHCP-enabled managed interfaces are automatically provided with a suggested IP address. Select whether the interface is Managed . If the interface is managed, configuration is pulled from the associated Capsule Server during provisioning, and DNS and DHCP entries are created. If using kickstart provisioning, a configuration file is automatically created for the interface. Select whether this is the Primary interface for the host. The DNS name from the primary interface is used as the host portion of the FQDN. Select whether this is the Provision interface for the host. TFTP boot takes place using the provisioning interface. For image-based provisioning, the script to complete the provisioning is executed through the provisioning interface. Select whether to use the interface for Remote execution . Leave the Virtual NIC checkbox clear. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.2. Adding a Virtual Interface Use this procedure to configure a virtual interface for a host. This can be either a VLAN or an alias interface. An alias interface is an additional IP address attached to an existing interface. An alias interface automatically inherits a MAC address from the interface it is attached to; therefore, you can create an alias without specifying a MAC address. The interface must be specified in a subnet with boot mode set to static . Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit next to the host you want to edit. On the Interfaces tab, click Add Interface . Keep the Interface option selected in the Type list. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" . Specify a MAC address for managed virtual interfaces so that the configuration files for provisioning are generated correctly. However, a MAC address is not required for virtual interfaces that are not managed. If creating a VLAN, specify ID in the form of eth1.10 in the Device Identifier field. If creating an alias, use ID in the form of eth1:10 . Select the Virtual NIC checkbox. Additional configuration options specific to virtual interfaces are appended to the form: Tag : Optionally set a VLAN tag to trunk a network segment from the physical network through to the virtual interface. If you do not specify a tag, managed interfaces inherit the VLAN tag of the associated subnet. User-specified entries from this field are not applied to alias interfaces. Attached to : Specify the identifier of the physical interface to which the virtual interface belongs, for example eth1 . This setting is required. Click OK to save the interface configuration. Click Submit to apply the changes to the host. 5.3. Adding a Bonded Interface Use this procedure to configure a bonded interface for a host. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > All hosts . Click Edit next to the host you want to edit. On the Interfaces tab, click Add Interface . Select Bond from the Type list. Additional type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" .
Bonded interfaces use IDs in the form of bond0 in the Device Identifier field. A single MAC address is sufficient. Specify the configuration options specific to bonded interfaces: Mode : Select the bonding mode that defines a policy for fault tolerance and load balancing. See Section 5.4, "Bonding Modes Available in Satellite" for a brief description of each bonding mode. Attached devices : Specify a comma-separated list of identifiers of attached devices. These can be physical interfaces or VLANs. Bond options : Specify a space-separated list of configuration options, for example miimon=100 . See the Red Hat Enterprise Linux 7 Networking Guide for details of the configuration options you can specify for the bonded interface. Click OK to save the interface configuration. Click Submit to apply the changes to the host. CLI procedure To create a host with a bonded interface, enter the following command: 5.4. Bonding Modes Available in Satellite Bonding Mode Description balance-rr Transmissions are received and sent sequentially on each bonded interface. active-backup Transmissions are received and sent through the first available bonded interface. Another bonded interface is only used if the active bonded interface fails. balance-xor Transmissions are based on the selected hash policy. In this mode, traffic destined for specific peers is always sent over the same interface. broadcast All transmissions are sent on all bonded interfaces. 802.3ad Creates aggregation groups that share the same settings. Transmits and receives on all interfaces in the active group. balance-tlb The outgoing traffic is distributed according to the current load on each bonded interface. balance-alb Receive load balancing is achieved through Address Resolution Protocol (ARP) negotiation. 5.5. Adding a Baseboard Management Controller (BMC) Interface Use this procedure to configure a baseboard management controller (BMC) interface for a host that supports this feature. Prerequisites The ipmitool package is installed. You know the MAC address, IP address, and other details of the BMC interface on the host, and the appropriate credentials for that interface. Note You only need the MAC address for the BMC interface if the BMC interface is managed, so that it can create a DHCP reservation. Procedure Enable BMC on the Capsule server if it is not already enabled: Configure BMC power management on Capsule Server by running the satellite-installer script with the following options: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column, click Refresh . The list in the Features column should now include BMC. In the Satellite web UI, navigate to Hosts > All hosts . Click Edit next to the host you want to edit. On the Interfaces tab, click Add Interface . Select BMC from the Type list. Type-specific configuration options are appended to the form. Specify the general interface settings. The applicable configuration options are the same as for the physical interfaces described in Section 5.1, "Adding a Physical Interface" . Specify the configuration options specific to BMC interfaces: Username and Password : Specify any authentication credentials required by BMC. Provider : Specify the BMC provider. Click OK to save the interface configuration. Click Submit to apply the changes to the host.
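You can also manage individual interfaces from the CLI with hammer. The following sketch adds a BMC interface to an existing host; the host name, addresses, and credentials are placeholders, and option names can differ between Satellite releases, so confirm them with hammer host interface create --help:
hammer host interface create --host "My_Host" --type bmc --mac "52:54:00:aa:bb:cc" --ip "192.168.100.210" --username "admin" --password "changeme" --provider "IPMI"
This mirrors what the web UI procedure in Section 5.5 does when you add a BMC interface to a host.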
|
[
"hammer host create --name bonded_interface --hostgroup-id 1 --ip= 192.168.100.123 --mac= 52:54:00:14:92:2a --subnet-id= 1 --managed true --interface=\"identifier= eth1 , mac= 52:54:00:62:43:06 , managed=true, type=Nic::Managed, domain_id= 1 , subnet_id= 1 \" --interface=\"identifier= eth2 , mac= 52:54:00:d3:87:8f , managed=true, type=Nic::Managed, domain_id= 1 , subnet_id= 1 \" --interface=\"identifier= bond0 , ip= 172.25.18.123 , type=Nic::Bond, mode=active-backup, attached_devices=[ eth1,eth2 ], managed=true, domain_id= 1 , subnet_id= 1 \" --organization \" My_Organization \" --location \" My_Location \" --ask-root-password yes",
"satellite-installer --foreman-proxy-bmc=true --foreman-proxy-bmc-default-provider=ipmitool"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/adding_network_interfaces_managing-hosts
|
Chapter 1. Management of security keys and certificates with the TLS Registry
|
Chapter 1. Management of security keys and certificates with the TLS Registry The TLS Registry is a Quarkus extension that centralizes TLS configuration, making it easier to manage and maintain secure connections across your application. When defining TLS configurations in a single centralized location, you can use the TLS Registry to reference these configurations from multiple components within the application, which ensures consistency and reduces the potential for configuration errors. The TLS Registry consolidates settings and supports multiple named configurations. Therefore, you can tailor TLS settings for different application parts. This flexibility is particularly useful when different components require distinct security configurations. The TLS Registry extension is automatically included in your project when you use compatible extensions, such as Quarkus REST, gRPC . As a result, applications that use the TLS Registry can be ready to handle secure communications out of the box. TLS Registry also provides automatic certificate reloading and compatibility with various keystore formats, such as PKCS12, PEM, and JKS. 1.1. Using the TLS registry To configure a TLS connection, including key and truststores, use the quarkus.tls.* properties. These properties are required for: Setting up the default TLS configuration, defined directly under quarkus.tls.* Creating separate, named configurations by using quarkus.tls.<name>.* . By specifying the quarkus.tls.<name>.* properties, you can adapt the TLS settings for a specific component. 1.1.1. Configuring HTTPS for a HTTP server To ensure secure client-server communication, the client is often required to verify the server's authenticity. The server must use a keystore that contains its certificate and private key The client needs to be configured with a truststore to validate the server's certificate During the TLS handshake, the server presents its certificate, which the client then validates. This prevents man-in-the-middle attacks and secures data transmission. The following sections guide you through setting up HTTPS by using PEM or PKCS12 keystore types. In addition, they provide information on how to use named configurations to specify and manage multiple TLS setups at once, which makes it possible for you to define distinct settings for each. Use one of the following configuration examples based on your keystore type: By using PEM files: quarkus.tls.key-store.pem.0.cert=server.crt quarkus.tls.key-store.pem.0.key=server.key quarkus.http.insecure-requests=disabled # Reject HTTP requests By using a p12 (PKCS12) keystore: quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret quarkus.http.insecure-requests=disabled # Reject HTTP requests Distinguishing multiple configurations with names: quarkus.tls.https.key-store.p12.path=server-keystore.p12 quarkus.tls.https.key-store.p12.password=secret quarkus.http.insecure-requests=disabled quarkus.http.tls-configuration-name=https 1.1.2. Configuring HTTPS for a client The following example configures a gRPC client named "hello" to use HTTPS with a truststore from the default TLS configuration: quarkus.tls.trust-store.jks.path=grpc-client-truststore.jks quarkus.tls.trust-store.jks.password=password quarkus.grpc.clients.hello.plain-text=false quarkus.grpc.clients.hello.use-quarkus-grpc-client=true 1.1.3. 
Configuring mTLS To set up mutual TLS (mTLS) in your Red Hat build of Quarkus application, configure the server and the client by creating and managing both a keystore and a truststore for each: Server keystore : Contains the server's certificate and private key. Client keystore : Contains the client's certificate and private key. Server truststore : Stores the client's certificate for authenticating the client. Client truststore : Stores the server's certificate for authenticating the server. An example configuration for specifying keystores and truststores: quarkus.tls.my-server.key-store.p12.path=target/certs/grpc-keystore.p12 quarkus.tls.my-server.key-store.p12.password=password quarkus.tls.my-server.trust-store.p12.path=target/certs/grpc-server-truststore.p12 quarkus.tls.my-server.trust-store.p12.password=password quarkus.tls.my-client.trust-store.p12.path=target/certs/grpc-client-truststore.p12 quarkus.tls.my-client.trust-store.p12.password=password quarkus.tls.my-client.key-store.p12.path=target/certs/grpc-client-keystore.p12 quarkus.tls.my-client.key-store.p12.password=password quarkus.grpc.clients.hello.plain-text=false quarkus.grpc.clients.hello.tls-configuration-name=my-client quarkus.grpc.clients.hello.use-quarkus-grpc-client=true quarkus.http.ssl.client-auth=REQUIRED # Enable mTLS quarkus.http.insecure-requests=disabled quarkus.http.tls-configuration-name=my-server quarkus.grpc.server.use-separate-server=false quarkus.grpc.server.plain-text=false This configuration enables mTLS by ensuring that both the server and client validate each other's certificates, which provides an additional layer of security. 1.2. Referencing a TLS configuration To reference an example named configuration that you created by using the quarkus.tls.<name>.* properties as explained in Using the TLS registry , use the tls-configuration-name property as shown in the following examples: Example configuration for the core HTTP server: # Reference the named configuration quarkus.http.tls-configuration-name=MY_TLS_CONFIGURATION Example configuration for a gRPC client: quarkus.grpc.clients.hello.tls-configuration-name=MY_TLS_CONFIGURATION 1.3. Configuring TLS TLS configuration primarily involves managing keystores and truststores. The specific setup depends on the format used, such as PEM, P12, or JKS. The following sections outline the various properties available for configuring TLS. 1.3.1. Key stores Key stores are used to store private keys and the certificates. They are mainly used on the server side but can also be used on the client side when mTLS is used. 1.3.1.1. PEM keystores Privacy Enhanced Mail (PEM) keystores are composed of a list of file pairs: The certificate file - a .crt or .pem file The private key file - often a .key file To configure a PEM keystore: quarkus.tls.key-store.pem.a.cert=server.crt quarkus.tls.key-store.pem.a.key=server.key quarkus.tls.key-store.pem.b.cert=my-second-cert.crt quarkus.tls.key-store.pem.b.key=my-second-key.key In most cases, you only need a single pair consisting of a certificate and a private key. Even if the certificate is part of a certificate chain, it includes only one private key that corresponds to the end-entity certificate. When multiple pairs are configured, the selection of one of the configured pairs of certificates and private keys is based on Server Name Indication (SNI). The client sends the name of the server to which the client is attempting to connect, and the server selects the appropriate pair of certificates and private keys. 
To use this feature, ensure that SNI is enabled on both the client and server. Important When configuring multiple key pairs or certificate pairs, the server executes the configured pairs in a lexicographical order of their names by default, as demonstrated with store.pem.a and store.pem.b in the example. The pair with the lowest lexicographical order is executed first. To change this, you can define the order by using the quarkus.tls.key-store.pem.order property. For example, quarkus.tls.key-store.pem.order=b,c,a . This setting is important when using SNI, because it uses the first specified pair as the default. 1.3.1.2. PKCS12 keystores PKCS12 keystores are single files that contain the certificate and the private key. To configure a PKCS12 keystore: quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret .p12 files are password-protected, so you need to provide the password to open the keystore. These files can include more than one certificate and private key. If this is the case, take either of the following actions: Provide and configure the alias of the certificate and the private key you want to use: quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret quarkus.tls.key-store.p12.alias=my-alias quarkus.tls.key-store.p12.alias-password=my-alias-password Alternatively, use SNI to select the appropriate certificate and private key. Note that all keys must use the same password. 1.3.1.3. JKS keystores JKS keystores are single files that contain the certificate and the private key for the server or client, used to authenticate and establish secure communications in TLS/SSL connections. Important JKS is an old but still widely used Java-specific format. However, to work with this format, you must use specific, and nowadays also deprecated, Java tooling. Thus, its use with your Red Hat build of Quarkus application is not recommended. Additionally, OpenShift cert-manager or Let's Encrypt does not typically provide JKS and remains PEM-only. To configure a JKS keystore: quarkus.tls.key-store.jks.path=server-keystore.jks quarkus.tls.key-store.jks.password=secret .jks files are password-protected, so you need to provide the password to open the keystore. Also, they can include more than one certificate and private key. If this is the case: Provide and configure the alias of the certificate and the private key you want to use: quarkus.tls.key-store.jks.path=server-keystore.jks quarkus.tls.key-store.jks.password=secret quarkus.tls.key-store.jks.alias=my-alias quarkus.tls.key-store.jks.alias-password=my-alias-password Alternatively, use SNI to select the appropriate certificate and private key. Note that all keys must use the same password. 1.3.1.4. SNI Server Name Indication (SNI) is a TLS extension that makes it possible for a client to specify the host name to which it attempts to connect during the TLS handshake. SNI enables a server to present different TLS certificates for multiple domains on a single IP address, which facilitates secure communication for virtual hosting scenarios. To enable SNI: quarkus.tls.key-store.sni=true # Disabled by default With SNI enabled, the client indicates the server name during the TLS handshake, which allows the server to select the appropriate certificate: When configuring the keystore with PEM files, multiple certificate (CRT) and key files must be provided. CRT is a common file extension for X.509 certificate files, typically in PEM (Privacy-Enhanced Mail) format. 
These files contain the public certificate. When configuring the keystore with a JKS or P12 file, the server selects the appropriate certificate based on the SNI host name provided by the client during the TLS handshake. The server matches the SNI hostname with the common name (CN) or subject alternative names (SAN) configured in the certificates stored in the keystore. All keystore and alias passwords must be identical. 1.3.1.5. Credential providers You can use a credential provider instead of passing the keystore password and alias password in the configuration. A credential provider offers a way to retrieve the keystore and alias password. Note that the credential provider is only used if the password or alias password is not set in the configuration. To configure a credential provider: # The name of the credential bucket in the credentials provider quarkus.tls.key-store.credentials-provider.name=my-credentials # The name of the bean providing the credential provider (optional, using the default credential provider if not set) quarkus.tls.key-store.credentials-provider.bean-name=my-credentials-provider # The key used to retrieve the keystore password, `password` by default quarkus.tls.key-store.credentials-provider.password-key=password # The key used to retrieve the alias password, `alias-password` by default quarkus.tls.key-store.credentials-provider.alias-password-key=alias-password Important The credential provider can only be used with PKCS12 and JKS keystores. 1.3.2. Trust stores Trust stores are used to store the certificates of the trusted parties. In regular TLS, the client uses a truststore to authenticate the server. With mutual TLS (mTLS), both the server and the client use truststores to authenticate each other. 1.3.2.1. PEM truststores PEM truststores are composed of a list of .crt or .pem files. Each of them contains a certificate. To configure a PEM truststore: quarkus.tls.trust-store.pem.certs=ca.crt,ca2.pem 1.3.2.2. PKCS12 truststores PKCS12 truststores are a single file containing the certificates. You can use the alias to select the appropriate certificate when multiple certificates are included. To configure a PKCS12 truststore: quarkus.tls.trust-store.p12.path=client-truststore.p12 quarkus.tls.trust-store.p12.password=password quarkus.tls.trust-store.p12.alias=my-alias .p12 files are password-protected, so you need to provide the password to open the truststore. However, unlike keystores, the alias does not require a password because it contains a public certificate, not a private key. 1.3.2.3. JKS truststores JKS truststores are single files that contain multiple certificates. You can use the alias to select the appropriate certificate when multiple certificates are present. However, avoid using the JKS format, because it is less secure than PKCS12. To configure a JKS truststore: quarkus.tls.trust-store.jks.path=client-truststore.jks quarkus.tls.trust-store.jks.password=password quarkus.tls.trust-store.jks.alias=my-alias .jks files are password-protected, so you need to provide the password to open the truststore. However, unlike keystores, the alias does not require a password because it contains a public certificate, not a private key. 1.3.2.4. Credential providers Instead of passing the truststore password in the configuration, you can use a credential provider. A credential provider allows you to retrieve passwords and other credentials. Note that the credential provider is used only if the password is not set in the configuration. 
To configure a credential provider: # The name of the credential bucket in the credentials provider quarkus.tls.trust-store.credentials-provider.name=my-credentials # The name of the bean providing the credential provider (optional, using the default credential provider if not set) quarkus.tls.trust-store.credentials-provider.bean-name=my-credentials-provider # The key used to retrieve the truststore password, `password` by default quarkus.tls.trust-store.credentials-provider.password-key=password Important The credential provider can only be used with PKCS12 and JKS truststores. 1.3.3. Other properties While keystores and truststores are the most important properties, there are other properties you can use to configure TLS. Note While the following examples use the default configuration, you can use the named configuration by prefixing the properties with the configuration's name. 1.3.3.1. Cipher suites Cipher suites are a list of ciphers that you can use during the TLS handshake. You can configure an ordered list of enabled cipher suites. If not configured, a reasonable default is selected from the built-in ciphers. However, when specified, your configuration precedes the default suite defined by the SSL engine in use. To configure the cipher suites: quarkus.tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384 1.3.3.2. TLS protocol versions The TLS protocol versions are the list of protocols that can be used during the TLS handshake. Enabled TLS protocol versions are specified as an ordered list separated by commas. The relevant configuration property is quarkus.tls.protocols (or quarkus.tls.<name>.protocols for named TLS configurations). It defaults to TLSv1.3, TLSv1.2 if not configured. The available options are TLSv1 , TLSv1.1 , TLSv1.2 , and TLSv1.3 . For example, to only enable TLSv1.3 : quarkus.tls.protocols=TLSv1.3 1.3.3.3. Handshake timeout When a TLS connection is established, the handshake phase is the first step. During this phase, the client and server exchange information to establish the connection, which typically includes the cipher suite, the TLS protocol version, and the certification validation. To configure the timeout for the handshake phase: quarkus.tls.handshake-timeout=10S # Default. 1.3.3.4. ALPN Application-Layer Protocol Negotiation (ALPN) is a TLS extension that allows the client and server to negotiate which protocol they will use for communication during the TLS handshake. ALPN enables more efficient communication by allowing the client to indicate its preferred application protocol to the server before establishing the TLS connection. This helps in scenarios like HTTP/2, where multiple protocols might be available, allowing for faster protocol selection. ALPN is enabled by default. To disable it: quarkus.tls.alpn=false Warning Disabling ALPN is not recommended for non-experts, as it can lead to performance degradation, protocol negotiation issues, and unexpected behavior, particularly with protocols like HTTP/2. However, disabling ALPN can be useful for diagnosing native inconsistencies or testing performance in specific edge cases where protocol negotiation causes conflicts. 1.3.3.5. Certificate Revocation List (CRL) A Certificate Revocation List (CRL) is a list of certificates that the issuing Certificate Authority (CA) revoked before their scheduled expiration date. When a certificate is compromised, no longer needed, or deemed invalid, the CA adds it to the CRL to inform relying parties not to trust it anymore. 
You can configure the CRL with the list of certificate files you no longer trust by using the DER or PKCS#7 (P7B) formats. For the DER format, pass DER-encoded CRLs. For the PKCS#7 format, pass the SignedData object, where the only significant field is crls . To configure the CRL: quarkus.tls.certificate-revocation-list=ca.crl, ca2.crl 1.3.3.6. Trusting all certificates and hostname verification You can configure your TLS connection to trust all certificates and disable the hostname verification. Note that these are two different processes: Trusting all certificates ignores the certificate validation, so all certificates are trusted. This method is useful for testing with self-signed certificates, but it should not be used in production. Hostname verification is the process of verifying the server's identity. It is useful to prevent man-in-the-middle attacks and often defaults to HTTPS or LDAPS . Important These two properties should not be used in production. To trust all certificates: quarkus.tls.trust-all=true To disable hostname verification: quarkus.tls.hostname-verification-algorithm=NONE 1.3.4. Configuration reference The following table lists the supported properties: Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.tls.lets-encrypt.enabled Set to true to enable let's encrypt support. Environment variable: QUARKUS_TLS_LETS_ENCRYPT_ENABLED boolean false quarkus.tls.key-store.pem.order The order of the key/cert files, based on the names in the keyCerts map. By default, Quarkus sorts the key using a lexicographical order. This property allows you to specify the order of the key/cert files. Environment variable: QUARKUS_TLS_KEY_STORE_PEM_ORDER list of string quarkus.tls.key-store.p12.path Path to the key store file (P12 / PFX format). Environment variable: QUARKUS_TLS_KEY_STORE_P12_PATH path required quarkus.tls.key-store.p12.password Password of the key store. When not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS_KEY_STORE_P12_PASSWORD string quarkus.tls.key-store.p12.alias Alias of the private key and certificate in the key store. Environment variable: QUARKUS_TLS_KEY_STORE_P12_ALIAS string quarkus.tls.key-store.p12.alias-password Password of the alias in the key store. If not set, the password will be retrieved from the credential provider. Environment variable: QUARKUS_TLS_KEY_STORE_P12_ALIAS_PASSWORD string quarkus.tls.key-store.p12.provider Provider of the key store. Environment variable: QUARKUS_TLS_KEY_STORE_P12_PROVIDER string quarkus.tls.key-store.jks.path Path to the keystore file (JKS format). Environment variable: QUARKUS_TLS_KEY_STORE_JKS_PATH path required quarkus.tls.key-store.jks.password Password of the key store. When not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS_KEY_STORE_JKS_PASSWORD string quarkus.tls.key-store.jks.alias Alias of the private key and certificate in the key store. Environment variable: QUARKUS_TLS_KEY_STORE_JKS_ALIAS string quarkus.tls.key-store.jks.alias-password Password of the alias in the key store. When not set, the password may be retrieved from the credential provider. Environment variable: QUARKUS_TLS_KEY_STORE_JKS_ALIAS_PASSWORD string quarkus.tls.key-store.jks.provider Provider of the key store. Environment variable: QUARKUS_TLS_KEY_STORE_JKS_PROVIDER string quarkus.tls.key-store.sni Enables Server Name Indication (SNI). 
Server Name Indication (SNI) is a TLS extension that allows a client to specify the hostname it is attempting to connect to during the TLS handshake. This enables a server to present different SSL certificates for multiple domains on a single IP address, facilitating secure communication for virtual hosting scenarios. With this setting enabled, the client indicate the server name during the TLS handshake, allowing the server to select the right certificate. When configuring the keystore with PEM files, multiple CRT/Key must be given. When configuring the keystore with a JKS or a P12 file, it selects one alias based on the SNI hostname. In this case, all the keystore password and alias password must be the same (configured with the password and alias-password properties. Do not set the alias property. Environment variable: QUARKUS_TLS_KEY_STORE_SNI boolean false quarkus.tls.key-store.credentials-provider.name The name of the "credential" bucket (map key passwords) to retrieve from the io.quarkus.credentials.CredentialsProvider . If not set, the credential provider will not be used. A credential provider offers a way to retrieve the key store password as well as alias password. Note that the credential provider is only used if the passwords are not set in the configuration. Environment variable: QUARKUS_TLS_KEY_STORE_CREDENTIALS_PROVIDER_NAME string quarkus.tls.key-store.credentials-provider.bean-name The name of the bean providing the credential provider. The name is used to select the credential provider to use. The credential provider must be exposed as a CDI bean and with the @Named annotation set to the configured name to be selected. If not set, the default credential provider is used. Environment variable: QUARKUS_TLS_KEY_STORE_CREDENTIALS_PROVIDER_BEAN_NAME string quarkus.tls.key-store.credentials-provider.password-key The key used to retrieve the key store password. If the selected credential provider does not support the key, the password is not retrieved. Otherwise, the retrieved value is used to open the key store. Environment variable: QUARKUS_TLS_KEY_STORE_CREDENTIALS_PROVIDER_PASSWORD_KEY string password quarkus.tls.key-store.credentials-provider.alias-password-key The key used to retrieve the key store alias password. If the selected credential provider does not contain the key, the alias password is not retrieved. Otherwise, the retrieved value is used to access the alias private key from the key store. Environment variable: QUARKUS_TLS_KEY_STORE_CREDENTIALS_PROVIDER_ALIAS_PASSWORD_KEY string alias-password quarkus.tls.trust-store.pem.certs List of the trusted cert paths (Pem format). Environment variable: QUARKUS_TLS_TRUST_STORE_PEM_CERTS list of path quarkus.tls.trust-store.p12.path Path to the trust store file (P12 / PFX format). Environment variable: QUARKUS_TLS_TRUST_STORE_P12_PATH path required quarkus.tls.trust-store.p12.password Password of the trust store. If not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS_TRUST_STORE_P12_PASSWORD string quarkus.tls.trust-store.p12.alias Alias of the trust store. Environment variable: QUARKUS_TLS_TRUST_STORE_P12_ALIAS string quarkus.tls.trust-store.p12.provider Provider of the trust store. Environment variable: QUARKUS_TLS_TRUST_STORE_P12_PROVIDER string quarkus.tls.trust-store.jks.path Path to the trust store file (JKS format). Environment variable: QUARKUS_TLS_TRUST_STORE_JKS_PATH path required quarkus.tls.trust-store.jks.password Password of the trust store. 
If not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS_TRUST_STORE_JKS_PASSWORD string quarkus.tls.trust-store.jks.alias Alias of the key in the trust store. Environment variable: QUARKUS_TLS_TRUST_STORE_JKS_ALIAS string quarkus.tls.trust-store.jks.provider Provider of the trust store. Environment variable: QUARKUS_TLS_TRUST_STORE_JKS_PROVIDER string quarkus.tls.trust-store.credentials-provider.name The name of the "credential" bucket (map key passwords) to retrieve from the io.quarkus.credentials.CredentialsProvider . If not set, the credential provider will not be used. A credential provider offers a way to retrieve the key store password as well as alias password. Note that the credential provider is only used if the passwords are not set in the configuration. Environment variable: QUARKUS_TLS_TRUST_STORE_CREDENTIALS_PROVIDER_NAME string quarkus.tls.trust-store.credentials-provider.bean-name The name of the bean providing the credential provider. The name is used to select the credential provider to use. The credential provider must be exposed as a CDI bean and with the @Named annotation set to the configured name to be selected. If not set, the default credential provider is used. Environment variable: QUARKUS_TLS_TRUST_STORE_CREDENTIALS_PROVIDER_BEAN_NAME string quarkus.tls.trust-store.credentials-provider.password-key The key used to retrieve the trust store password. If the selected credential provider does not contain the configured key, the password is not retrieved. Otherwise, the retrieved value is used to open the trust store. Environment variable: QUARKUS_TLS_TRUST_STORE_CREDENTIALS_PROVIDER_PASSWORD_KEY string password quarkus.tls.cipher-suites Sets the ordered list of enabled cipher suites. If none is given, a reasonable default is selected from the built-in ciphers. When suites are set, they take precedence over the default suite defined by the SSLEngineOptions in use. Environment variable: QUARKUS_TLS_CIPHER_SUITES list of string quarkus.tls.protocols Sets the ordered list of enabled TLS protocols. If not set, it defaults to "TLSv1.3, TLSv1.2" . The following protocols are supported: TLSv1, TLSv1.1, TLSv1.2, TLSv1.3 . To only enable TLSv1.3 , set the value to "TLSv1.3" . Note that setting an empty list while TLS is enabled is invalid; you must have at least one protocol. Also, setting this replaces the default list of protocols. Environment variable: QUARKUS_TLS_PROTOCOLS list of string TLSv1.3,TLSv1.2 quarkus.tls.handshake-timeout The timeout for the TLS handshake phase. If not set, it defaults to 10 seconds. Environment variable: QUARKUS_TLS_HANDSHAKE_TIMEOUT Duration 10S quarkus.tls.alpn Enables the Application-Layer Protocol Negotiation (ALPN). Application-Layer Protocol Negotiation is a TLS extension that allows the client and server during the TLS handshake to negotiate which protocol they will use for communication. ALPN enables more efficient communication by allowing the client to indicate its preferred application protocol to the server before the TLS connection is established. This helps in scenarios such as HTTP/2 where multiple protocols may be available, allowing for faster protocol selection. Environment variable: QUARKUS_TLS_ALPN boolean true quarkus.tls.certificate-revocation-list Sets the list of revoked certificates (paths to files).
A Certificate Revocation List (CRL) is a list of digital certificates that have been revoked by the issuing Certificate Authority (CA) before their scheduled expiration date. When a certificate is compromised, no longer needed, or deemed invalid for any reason, the CA adds it to the CRL to inform relying parties not to trust the certificate anymore. Two formats are allowed: DER and PKCS#7 (also known as P7B). When using the DER format, you must pass DER-encoded CRLs. When using the PKCS#7 format, you must pass a PKCS#7 SignedData object, with the only significant field being crls . Environment variable: QUARKUS_TLS_CERTIFICATE_REVOCATION_LIST list of path quarkus.tls.trust-all If set to true , the server trusts all certificates. This is useful for testing, but should not be used in production. Environment variable: QUARKUS_TLS_TRUST_ALL boolean false quarkus.tls.hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS (default), LDAPS or NONE . If set to NONE , it does not verify the hostname. If not set, the configured extension decides the default algorithm to use. For example, for HTTP, it will be "HTTPS". For TCP, it can depend on the protocol. Nevertheless, it is recommended to set it to "HTTPS" or "LDAPS". Environment variable: QUARKUS_TLS_HOSTNAME_VERIFICATION_ALGORITHM string quarkus.tls.reload-period When configured, the server will reload the certificates (from the file system, for example) and fire a CertificateUpdatedEvent if the reload is successful. This property configures the period to reload the certificates. If not set, the certificates won't be reloaded automatically. However, the application can still trigger the reload manually using the io.quarkus.tls.TlsConfiguration#reload() method, and then fire the CertificateUpdatedEvent manually. The fired event is used to notify the application that the certificates have been updated, and thus proceed with the actual switch of certificates. Environment variable: QUARKUS_TLS_RELOAD_PERIOD Duration quarkus.tls.key-store.pem."key-certs".key The path to the key file (in PEM format). Environment variable: QUARKUS_TLS_KEY_STORE_PEM__KEY_CERTS__KEY path required quarkus.tls.key-store.pem."key-certs".cert The path to the certificate file (in PEM format). Environment variable: QUARKUS_TLS_KEY_STORE_PEM__KEY_CERTS__CERT path required quarkus.tls."tls-bucket-name".key-store.pem."key-certs".key The path to the key file (in PEM format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_PEM__KEY_CERTS__KEY path required quarkus.tls."tls-bucket-name".key-store.pem."key-certs".cert The path to the certificate file (in PEM format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_PEM__KEY_CERTS__CERT path required quarkus.tls."tls-bucket-name".key-store.pem.order The order of the key/cert files, based on the names in the keyCerts map. By default, Quarkus sorts the key using a lexicographical order. This property allows you to specify the order of the key/cert files. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_PEM_ORDER list of string quarkus.tls."tls-bucket-name".key-store.p12.path Path to the key store file (P12 / PFX format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_P12_PATH path required quarkus.tls."tls-bucket-name".key-store.p12.password Password of the key store. When not set, the password must be retrieved from the credential provider.
Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_P12_PASSWORD string quarkus.tls."tls-bucket-name".key-store.p12.alias Alias of the private key and certificate in the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_P12_ALIAS string quarkus.tls."tls-bucket-name".key-store.p12.alias-password Password of the alias in the key store. If not set, the password will be retrieved from the credential provider. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_P12_ALIAS_PASSWORD string quarkus.tls."tls-bucket-name".key-store.p12.provider Provider of the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_P12_PROVIDER string quarkus.tls."tls-bucket-name".key-store.jks.path Path to the keystore file (JKS format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_JKS_PATH path required quarkus.tls."tls-bucket-name".key-store.jks.password Password of the key store. When not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_JKS_PASSWORD string quarkus.tls."tls-bucket-name".key-store.jks.alias Alias of the private key and certificate in the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_JKS_ALIAS string quarkus.tls."tls-bucket-name".key-store.jks.alias-password Password of the alias in the key store. When not set, the password may be retrieved from the credential provider. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_JKS_ALIAS_PASSWORD string quarkus.tls."tls-bucket-name".key-store.jks.provider Provider of the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_JKS_PROVIDER string quarkus.tls."tls-bucket-name".key-store.sni Enables Server Name Indication (SNI). Server Name Indication (SNI) is a TLS extension that allows a client to specify the hostname it is attempting to connect to during the TLS handshake. This enables a server to present different SSL certificates for multiple domains on a single IP address, facilitating secure communication for virtual hosting scenarios. With this setting enabled, the client indicates the server name during the TLS handshake, allowing the server to select the right certificate. When configuring the keystore with PEM files, multiple CRT/key files must be given. When configuring the keystore with a JKS or a P12 file, it selects one alias based on the SNI hostname. In this case, the keystore password and the alias passwords must be the same (configured with the password and alias-password properties). Do not set the alias property. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_SNI boolean false quarkus.tls."tls-bucket-name".key-store.credentials-provider.name The name of the "credential" bucket (map key passwords) to retrieve from the io.quarkus.credentials.CredentialsProvider . If not set, the credential provider will not be used. A credential provider offers a way to retrieve the key store password as well as alias password. Note that the credential provider is only used if the passwords are not set in the configuration. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_CREDENTIALS_PROVIDER_NAME string quarkus.tls."tls-bucket-name".key-store.credentials-provider.bean-name The name of the bean providing the credential provider. The name is used to select the credential provider to use. The credential provider must be exposed as a CDI bean and with the @Named annotation set to the configured name to be selected.
If not set, the default credential provider is used. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_CREDENTIALS_PROVIDER_BEAN_NAME string quarkus.tls."tls-bucket-name".key-store.credentials-provider.password-key The key used to retrieve the key store password. If the selected credential provider does not support the key, the password is not retrieved. Otherwise, the retrieved value is used to open the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_CREDENTIALS_PROVIDER_PASSWORD_KEY string password quarkus.tls."tls-bucket-name".key-store.credentials-provider.alias-password-key The key used to retrieve the key store alias password. If the selected credential provider does not contain the key, the alias password is not retrieved. Otherwise, the retrieved value is used to access the alias private key from the key store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__KEY_STORE_CREDENTIALS_PROVIDER_ALIAS_PASSWORD_KEY string alias-password quarkus.tls."tls-bucket-name".trust-store.pem.certs List of the trusted cert paths (Pem format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_PEM_CERTS list of path quarkus.tls."tls-bucket-name".trust-store.p12.path Path to the trust store file (P12 / PFX format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_P12_PATH path required quarkus.tls."tls-bucket-name".trust-store.p12.password Password of the trust store. If not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_P12_PASSWORD string quarkus.tls."tls-bucket-name".trust-store.p12.alias Alias of the trust store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_P12_ALIAS string quarkus.tls."tls-bucket-name".trust-store.p12.provider Provider of the trust store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_P12_PROVIDER string quarkus.tls."tls-bucket-name".trust-store.jks.path Path to the trust store file (JKS format). Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_JKS_PATH path required quarkus.tls."tls-bucket-name".trust-store.jks.password Password of the trust store. If not set, the password must be retrieved from the credential provider. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_JKS_PASSWORD string quarkus.tls."tls-bucket-name".trust-store.jks.alias Alias of the key in the trust store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_JKS_ALIAS string quarkus.tls."tls-bucket-name".trust-store.jks.provider Provider of the trust store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_JKS_PROVIDER string quarkus.tls."tls-bucket-name".trust-store.credentials-provider.name The name of the "credential" bucket (map key passwords) to retrieve from the io.quarkus.credentials.CredentialsProvider . If not set, the credential provider will not be used. A credential provider offers a way to retrieve the key store password as well as alias password. Note that the credential provider is only used if the passwords are not set in the configuration. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_CREDENTIALS_PROVIDER_NAME string quarkus.tls."tls-bucket-name".trust-store.credentials-provider.bean-name The name of the bean providing the credential provider. The name is used to select the credential provider to use. The credential provider must be exposed as a CDI bean and with the @Named annotation set to the configured name to be selected. 
If not set, the default credential provider is used. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_CREDENTIALS_PROVIDER_BEAN_NAME string quarkus.tls."tls-bucket-name".trust-store.credentials-provider.password-key The key used to retrieve the trust store password. If the selected credential provider does not contain the configured key, the password is not retrieved. Otherwise, the retrieved value is used to open the trust store. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_STORE_CREDENTIALS_PROVIDER_PASSWORD_KEY string password quarkus.tls."tls-bucket-name".cipher-suites Sets the ordered list of enabled cipher suites. If none is given, a reasonable default is selected from the built-in ciphers. When suites are set, they take precedence over the default suite defined by the SSLEngineOptions in use. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__CIPHER_SUITES list of string quarkus.tls."tls-bucket-name".protocols Sets the ordered list of enabled TLS protocols. If not set, it defaults to "TLSv1.3, TLSv1.2" . The following protocols are supported: TLSv1, TLSv1.1, TLSv1.2, TLSv1.3 . To only enable TLSv1.3 , set the value to "TLSv1.3" . Note that setting an empty list while TLS is enabled is invalid; you must have at least one protocol. Also, setting this replaces the default list of protocols. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__PROTOCOLS list of string TLSv1.3,TLSv1.2 quarkus.tls."tls-bucket-name".handshake-timeout The timeout for the TLS handshake phase. If not set, it defaults to 10 seconds. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__HANDSHAKE_TIMEOUT Duration 10S quarkus.tls."tls-bucket-name".alpn Enables the Application-Layer Protocol Negotiation (ALPN). Application-Layer Protocol Negotiation is a TLS extension that allows the client and server during the TLS handshake to negotiate which protocol they will use for communication. ALPN enables more efficient communication by allowing the client to indicate its preferred application protocol to the server before the TLS connection is established. This helps in scenarios such as HTTP/2 where multiple protocols may be available, allowing for faster protocol selection. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__ALPN boolean true quarkus.tls."tls-bucket-name".certificate-revocation-list Sets the list of revoked certificates (paths to files). A Certificate Revocation List (CRL) is a list of digital certificates that have been revoked by the issuing Certificate Authority (CA) before their scheduled expiration date. When a certificate is compromised, no longer needed, or deemed invalid for any reason, the CA adds it to the CRL to inform relying parties not to trust the certificate anymore. Two formats are allowed: DER and PKCS#7 (also known as P7B). When using the DER format, you must pass DER-encoded CRLs. When using the PKCS#7 format, you must pass a PKCS#7 SignedData object, with the only significant field being crls . Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__CERTIFICATE_REVOCATION_LIST list of path quarkus.tls."tls-bucket-name".trust-all If set to true , the server trusts all certificates. This is useful for testing, but should not be used in production. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__TRUST_ALL boolean false quarkus.tls."tls-bucket-name".hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS (default), LDAPS or NONE .
If set to NONE , it does not verify the hostname. If not set, the configured extension decides the default algorithm to use. For example, for HTTP, it will be "HTTPS". For TCP, it can depend on the protocol. Nevertheless, it is recommended to set it to "HTTPS" or "LDAPS". Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__HOSTNAME_VERIFICATION_ALGORITHM string quarkus.tls."tls-bucket-name".reload-period When configured, the server will reload the certificates (from the file system, for example) and fire a CertificateUpdatedEvent if the reload is successful. This property configures the period to reload the certificates. If not set, the certificates won't be reloaded automatically. However, the application can still trigger the reload manually using the io.quarkus.tls.TlsConfiguration#reload() method, and then fire the CertificateUpdatedEvent manually. The fired event is used to notify the application that the certificates have been updated, and thus proceed with the actual switch of certificates. Environment variable: QUARKUS_TLS__TLS_BUCKET_NAME__RELOAD_PERIOD Duration About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information. You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.4. The registry API While extensions automatically use the TLS registry, you can also access the TLS configuration programmatically through the registry API. To access the TLS configuration, inject the TlsConfigurationRegistry bean. You can retrieve a named TLS configuration by calling get("<NAME>") or the default configuration by calling getDefault() . @Inject TlsConfigurationRegistry certificates; // ... TlsConfiguration def = certificates.getDefault().orElseThrow(); TlsConfiguration named = certificates.get("name").orElseThrow(); //... The TlsConfiguration object contains the keystores, truststores, cipher suites, protocols, and other properties. It also provides a way to create an SSLContext from the configuration. You can also use the TlsConfiguration object to configure the Vert.x client or server, such as KeyCertOptions , TrustOptions , and so on. 1.5. Registering a certificate from an extension This section is only for extension developers. An extension can register a certificate in the TLS registry. This is useful when an extension needs to provide a certificate to the application or to provide it in a different format. To register a certificate in the TLS registry by using the extension, the processor extension must produce a TlsCertificateBuildItem composed of a name and a CertificateSupplier . TlsCertificateBuildItem item = new TlsCertificateBuildItem("named", new MyCertificateSupplier()); The certificate supplier is a runtime object generally retrieved by using a recorder method.
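A minimal sketch of that wiring follows. It assumes the usual split between a runtime module (the recorder) and a deployment module (the build step); the class and method names are illustrative, and the import for TlsCertificateBuildItem is omitted because its package is not shown in this reference:

import java.util.function.Supplier;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.annotations.ExecutionTime;
import io.quarkus.deployment.annotations.Record;
import io.quarkus.runtime.annotations.Recorder;
import io.quarkus.tls.TlsConfiguration;

// Runtime module (assumed layout): the recorder returns the supplier invoked at runtime.
@Recorder
public class MyCertificateRecorder {
    public Supplier<TlsConfiguration> certificateSupplier() {
        // MyCertificateSupplier is the class shown in the next example.
        return new MyCertificateSupplier();
    }
}

// Deployment module (assumed layout): the build step records the supplier and produces the build item.
class MyCertificateProcessor {
    @BuildStep
    @Record(ExecutionTime.RUNTIME_INIT)
    TlsCertificateBuildItem registerNamedCertificate(MyCertificateRecorder recorder) {
        return new TlsCertificateBuildItem("named", recorder.certificateSupplier());
    }
}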
An example of a certificate supplier: public class MyCertificateSupplier implements Supplier<TlsConfiguration> { @Override public TlsConfiguration get() { try { KeyStore ks = KeyStore.getInstance("PKCS12"); ks.load(getClass().getResourceAsStream("target/certs/test-registration-keystore.p12"), "password".toCharArray()); KeyStore ts = KeyStore.getInstance("PKCS12"); ts.load(getClass().getResourceAsStream("target/certs/test-registration-truststore.p12"), "password".toCharArray()); return new BaseTlsConfiguration() { @Override public KeyStore getKeyStore() { return ks; } @Override public KeyStore getTrustStore() { return ts; } }; } catch (Exception e) { throw new RuntimeException(e); } } } 1.6. Startup checks When an application that uses the TLS extension starts, the TLS registry performs several checks to ensure the configuration is correct: Keystores and truststores are accessible. Aliases are available and accessible in the keystores and truststores. Certificates are valid. Cipher suites and protocols are valid. Certificate Revocation Lists (CRLs) are valid. If any of these checks fail, the application will not start. 1.7. Reloading certificates The TlsConfiguration obtained from the TlsConfigurationRegistry includes a mechanism for reloading certificates. The reload method refreshes the keystores and truststores, typically by reloading them from the file system. Note The reload operation is not automatic and must be triggered manually. Additionally, the TlsConfiguration implementation must support reloading (which is the case for the configured certificate). The reload method returns a boolean indicating whether the reload was successful. A value of true means the reload operation was successful, not necessarily that there were updates to the certificates. After a TlsConfiguration has been reloaded, servers and clients using this configuration may need to perform specific actions to apply the new certificates. The recommended approach for informing clients and servers about the certificate reload is to fire a CDI event of type io.quarkus.tls.CertificateUpdatedEvent . To do so, inject a CDI event of this type and fire it when a reload occurs. Manually triggering a reload and firing a CertificateUpdatedEvent : 1.7.1. Periodic reloading The TLS registry includes a built-in mechanism for periodically checking the file system for changes and reloading certificates. The reload-period property specifies the interval for reloading certificates; a CertificateUpdatedEvent is emitted each time the certificates are reloaded. To configure periodic certificate reloading: quarkus.tls.reload-period=1h quarkus.tls.key-store.pem.0.cert=tls.crt quarkus.tls.key-store.pem.0.key=tls.key For each named configuration, you can set a specific reload period: quarkus.tls.http.reload-period=30min quarkus.tls.http.key-store.pem.0.cert=tls.crt quarkus.tls.http.key-store.pem.0.key=tls.key Important Impacted servers and clients may need to listen to the CertificateUpdatedEvent to apply the new certificates. This is automatically done for the Quarkus HTTP server, including the management interface if it is enabled. 1.8. Working with OpenShift serving certificates When running your application in OpenShift, you can use the OpenShift serving certificates to generate and renew TLS certificates automatically. The Quarkus TLS registry can use these certificates and Certificate Authority (CA) files to handle HTTPS traffic and validate certificates securely. 1.8.1.
Acquiring a certificate To have OpenShift generate a serving certificate, annotate an existing Service object. The generated certificate will be stored in a secret, which you can then mount in your pod. The following snippet uses an example Service object with an annotation for generating a TLS certificate. View the configuration of the Service object: apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... app.kubernetes.io/managed-by: quarkus name: hero-service spec: ports: - name: http port: 443 protocol: TCP targetPort: 8443 selector: app.kubernetes.io/name: ... app.kubernetes.io/version: ... type: ClusterIP To generate a certificate, add this annotation to the OpenShift service you already created: oc annotate service hero-service \ service.beta.openshift.io/serving-cert-secret-name=my-tls-secret The annotation service.beta.openshift.io/serving-cert-secret-name instructs OpenShift to generate a certificate and store it in a secret named my-tls-secret . Mount the secret as a volume in your pod by updating your Deployment configuration: apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... name: my-service spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... template: metadata: labels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... spec: volumes: - name: my-tls-secret 1 secret: secretName: my-tls-secret containers: - env: - name: KUBERNETES_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: QUARKUS_TLS_KEY_STORE_PEM_ACME_CERT 2 value: /etc/tls/tls.crt - name: QUARKUS_TLS_KEY_STORE_PEM_ACME_KEY value: /etc/tls/tls.key image: ... imagePullPolicy: Always name: my-service volumeMounts: 3 - name: my-tls-secret mountPath: /etc/tls readOnly: true ports: - containerPort: 8443 4 name: https protocol: TCP 1 Define a volume to mount the secret. Use the same name as the secret declared above. 2 Set up the keystore with the paths to the certificate and private key. This can be configured by using environment variables or configuration files. This example uses environment variables. OpenShift serving certificates always create the tls.crt and tls.key files. 3 Mount the secret in the container. Ensure that the path matches the one used in the configuration (here /etc/tls ). 4 Configure the port to serve HTTPS. Deploy your application to use the certificate generated by OpenShift. This will make the service available over HTTPS. Note By setting the quarkus.tls.key-store.pem.acme.cert and quarkus.tls.key-store.pem.acme.key variables or their environment variable variants, the TLS registry will use the certificate and private key from the secret. This configures the default keystore for the Quarkus HTTP server, which allows the server to use the certificate. For information about using this certificate in a named configuration, see Referencing a TLS configuration . 1.8.2. Trusting the Certificate Authority (CA) Prerequisites Acquiring a certificate Now that your service uses a certificate issued by OpenShift, configure your client applications to trust this certificate. To do so, create a ConfigMap that holds the CA certificate, and then configure the pod to mount it. The following steps use a Quarkus REST client as an example, but the same approach applies to any client.
Start by defining an empty ConfigMap, which will be populated with the CA certificate: apiVersion: v1 kind: ConfigMap metadata: name: client-tls-config annotations: service.beta.openshift.io/inject-cabundle: "true" The service.beta.openshift.io/inject-cabundle annotation is used to inject the CA certificate into the ConfigMap. Note that the ConfigMap initially has no data - it is empty. During its processing, OpenShift injects the CA certificate into the ConfigMap in the service-ca.crt file. Mount the ConfigMap by adding a volume and mounting it in your Deployment configuration: apiVersion: apps/v1 kind: Deployment metadata: name: my-service-client labels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... template: metadata: labels: app.kubernetes.io/name: ... app.kubernetes.io/version: ... spec: containers: - name: my-service-client image: ... ports: - name: http containerPort: 8080 protocol: TCP volumeMounts: 1 - name: my-client-volume mountPath: /deployments/tls volumes: 2 - name: my-client-volume configMap: name: client-tls-config 1 Mount the ConfigMap in the container. Ensure that the path matches the one used in the configuration (in this example /deployments/tls ). 2 Define a volume to mount the ConfigMap and reference the ConfigMap that receives the CA certificate. Configure the REST client to use this CA certificate. Consider the following REST client interface: package org.acme; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient(baseUri = "https://hero-service.cescoffi-dev.svc", configKey = "hero") 1 public interface HeroClient { record Hero (Long id, String name, String otherName, int level, String picture, String powers) { } @GET @Path("/api/heroes/random") Hero getRandomHero(); } 1 Configure the base URI and the configuration key. The name must be in the format <service-name>.<namespace>.svc . Otherwise, the certificate will not be trusted. Ensure that the configKey is also configured. Configure the REST client to trust the CA certificate generated by OpenShift: quarkus.rest-client.hero.tls-configuration-name=my-service-tls 1 quarkus.tls.my-service-tls.trust-store.pem.certs=/deployments/tls/service-ca.crt 2 1 Configure the hero REST client with the TLS configuration named my-service-tls . 2 Set up the my-service-tls TLS configuration, specifically the truststore with the CA certificate. Ensure the path matches the one used in the Kubernetes Deployment configuration. The file is always named service-ca.crt . 1.8.3. Certificate renewal OpenShift automatically renews the serving certificates it generates. When the certificate is renewed, the secret is updated with the new certificate and private key. To ensure your application uses the new certificate, you can use the periodic reloading feature of the Quarkus TLS registry. By setting the reload-period property, the TLS registry will periodically check the keystores and truststores for changes and reload them if needed: quarkus.tls.reload-period=24h Optionally, implement a custom mechanism to reload the certificates when the secret is updated. See Reloading certificates for more information. 1.9. Quarkus CLI commands and development Certificate Authority The TLS registry provides Quarkus CLI commands to generate a development Certificate Authority (CA) and trusted certificates. 
This avoids having to use self-signed certificates locally. The following snippet shows the description of the quarkus tls command, containing two sub-commands: > quarkus tls Install and Manage TLS development certificates Usage: tls [COMMAND] Commands: generate-quarkus-ca Generate Quarkus Dev CA certificate and private key. 1 generate-certificate Generate a TLS certificate with the Quarkus Dev CA if available. 2 1 This is useful for local development, as it allows Quarkus to act as its own certificate authority, which can be used to sign other certificates. 2 This is useful when creating a certificate for secure communication between your application and external services or clients during development. In most cases, you generate the Quarkus Development CA once and then generate certificates signed by this CA. The Quarkus Development CA is a Certificate Authority that can be used to sign certificates locally. It is only valid for development purposes and only trusted on the local machine. The generated CA is located in $HOME/.quarkus/quarkus-dev-root-ca.pem , and installed in the system truststore. 1.9.1. Understanding self-signed versus CA-signed certificates When developing with TLS, you can use two types of certificates: Self-signed certificate : The certificate is signed by the same entity that uses it. It is not trusted by default. This type of certificate is typically used when a Certificate Authority (CA) is unavailable or you want a simple setup. It is not suitable for production and should only be used for development. CA-signed certificate : The certificate is signed by a Certificate Authority (CA), a trusted entity. This certificate is trusted by default and is the standard choice for production environments. While you can use a self-signed certificate for local development, it has limitations. Browsers and tools like curl , wget , and httpie typically do not trust self-signed certificates, requiring a manual import of the CA into your OS. To avoid this issue, you can use a development CA to sign certificates and install the CA in the system truststore. This ensures that the system trusts all certificates signed by the CA. Quarkus simplifies the generation of a development CA and the certificates that are signed by this CA. 1.9.2. Generate a development CA The development CA is a Certificate Authority that can be used to sign certificates locally. Note that the generated CA is only valid for development purposes and can only be trusted on the local machine. To generate a development CA: quarkus tls generate-quarkus-ca --install \ 1 --renew \ 2 --truststore 3 1 --install installs the CA in the system truststore. Windows, Mac, and Linux (Fedora and Ubuntu) are supported. However, depending on your browser, you might need to import the generated CA manually. Refer to your browser's documentation for more information. The generated CA is located in $HOME/.quarkus/quarkus-dev-root-ca.pem . 2 --renew renews the CA if it already exists. When this option is used, the private key is changed, so you need to regenerate the certificates signed by the CA. If the CA expires, it will automatically renew without requiring the --renew option. 3 --truststore also generates a PKCS12 truststore containing the CA certificate. Warning When installing the certificate, your system might ask for your password to install the certificate in the system truststore or ask for confirmation in a dialog on Windows.
Important On Windows, run as administrator from an elevated terminal to install the CA in the system truststore. 1.9.3. Generating a trusted (signed) certificate Prerequisites Generate a development CA After installing the Quarkus Development CA, generate a trusted certificate. This certificate will be signed by the Quarkus Development CA and trusted by your system. quarkus tls generate-certificate --name my-cert This command generates a certificate signed by the Quarkus Development CA, which your system will trust if properly installed or imported. The certificate is stored in ./.certs/ . Two files are generated: $NAME-keystore.p12 : Contains the private key and the certificate. It is password-protected. $NAME-truststore.p12 : Contains the CA certificate, which you can use as a truststore, for example, for testing. Additional options are available: Usage: tls generate-certificate [-hrV] [-c=<cn>] [-d=<directory>] -n=<name> [-p=<password>] Generate a TLS certificate with the Quarkus Dev CA if available. -c, --cn=<cn> The common name of the certificate. Default is 'localhost' -d, --directory=<directory> The directory in which the certificates will be created. Default is `.certs` -n, --name=<name> Name of the certificate. It will be used as file name and alias in the keystore -p, --password=<password> The password of the keystore. Default is 'password' -r, --renew Whether existing certificates will need to be replaced A .env file is also generated when generating the certificate, making the Quarkus dev mode aware of these certificates. Run your application in dev mode to use these certificates: ./mvnw quarkus:dev ... INFO [io.quarkus] (Quarkus Main Thread) demo 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.286s. Listening on: http://localhost:8080 and https://localhost:8443 Open the Dev UI by using HTTPS: https://localhost:8443/q/dev or by issuing a curl request: curl https://localhost:8443/hello Hello from Quarkus REST% Important Quarkus generates a self-signed certificate if the Quarkus Development CA is not installed. 1.9.4. Generating a self-signed certificate Even if the Quarkus Development CA is installed, you can generate a self-signed certificate: quarkus tls generate-certificate --name my-cert --self-signed This generates a self-signed certificate that the Quarkus Development CA does not sign. 1.9.5. Uninstalling the Quarkus Development CA Uninstalling the Quarkus Development CA from your system depends on your OS. 1.9.5.1. Deleting the CA certificate on Windows List the CA certificate on Windows by using the PowerShell terminal with administrator rights: # First, we need to identify the serial number of the CA certificate > certutil -store -user Root root "Trusted Root Certification Authorities" ================ Certificate 0 ================ Serial Number: 019036d564c8 Issuer: O=Quarkus, CN=quarkus-dev-root-ca # <-That's the CA, copy the Serial Number (the line above) NotBefore: 6/19/2024 11:07 AM NotAfter: 6/20/2025 11:07 AM Subject: C=Cloud, S=world, L=home, OU=Quarkus Dev, O=Quarkus Dev, CN=quarkus-dev-root-ca Signature matches Public Key Non-root Certificate uses same Public Key as Issuer Cert Hash(sha1): 3679bc95b613a2112a3d3256fe8321b6eccce720 No key provider information Cannot find the certificate and private key for decryption. CertUtil: -store command completed successfully. Delete the stored CA certificate and replace $Serial_Number with the serial number of the CA certificate: > certutil -delstore -user -v Root $Serial_Number 1.9.5.2.
Deleting the CA certificate on Linux On Fedora: sudo rm /etc/pki/ca-trust/source/anchors/quarkus-dev-root-ca.pem sudo update-ca-trust On Ubuntu: sudo rm /usr/local/share/ca-certificates/quarkus-dev-root-ca.pem sudo update-ca-certificates 1.9.5.3. Deleting the CA certificate on Mac On Mac: sudo security -v remove-trusted-cert -d /Users/clement/.quarkus/quarkus-dev-root-ca.pem
|
[
"quarkus.tls.key-store.pem.0.cert=server.crt quarkus.tls.key-store.pem.0.key=server.key quarkus.http.insecure-requests=disabled # Reject HTTP requests",
"quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret quarkus.http.insecure-requests=disabled # Reject HTTP requests",
"quarkus.tls.https.key-store.p12.path=server-keystore.p12 quarkus.tls.https.key-store.p12.password=secret quarkus.http.insecure-requests=disabled quarkus.http.tls-configuration-name=https",
"quarkus.tls.trust-store.jks.path=grpc-client-truststore.jks quarkus.tls.trust-store.jks.password=password quarkus.grpc.clients.hello.plain-text=false quarkus.grpc.clients.hello.use-quarkus-grpc-client=true",
"quarkus.tls.my-server.key-store.p12.path=target/certs/grpc-keystore.p12 quarkus.tls.my-server.key-store.p12.password=password quarkus.tls.my-server.trust-store.p12.path=target/certs/grpc-server-truststore.p12 quarkus.tls.my-server.trust-store.p12.password=password quarkus.tls.my-client.trust-store.p12.path=target/certs/grpc-client-truststore.p12 quarkus.tls.my-client.trust-store.p12.password=password quarkus.tls.my-client.key-store.p12.path=target/certs/grpc-client-keystore.p12 quarkus.tls.my-client.key-store.p12.password=password quarkus.grpc.clients.hello.plain-text=false quarkus.grpc.clients.hello.tls-configuration-name=my-client quarkus.grpc.clients.hello.use-quarkus-grpc-client=true quarkus.http.ssl.client-auth=REQUIRED # Enable mTLS quarkus.http.insecure-requests=disabled quarkus.http.tls-configuration-name=my-server quarkus.grpc.server.use-separate-server=false quarkus.grpc.server.plain-text=false",
"Reference the named configuration quarkus.http.tls-configuration-name=MY_TLS_CONFIGURATION",
"quarkus.grpc.clients.hello.tls-configuration-name=MY_TLS_CONFIGURATION",
"quarkus.tls.key-store.pem.a.cert=server.crt quarkus.tls.key-store.pem.a.key=server.key quarkus.tls.key-store.pem.b.cert=my-second-cert.crt quarkus.tls.key-store.pem.b.key=my-second-key.key",
"quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret",
"quarkus.tls.key-store.p12.path=server-keystore.p12 quarkus.tls.key-store.p12.password=secret quarkus.tls.key-store.p12.alias=my-alias quarkus.tls.key-store.p12.alias-password=my-alias-password",
"quarkus.tls.key-store.jks.path=server-keystore.jks quarkus.tls.key-store.jks.password=secret",
"quarkus.tls.key-store.jks.path=server-keystore.jks quarkus.tls.key-store.jks.password=secret quarkus.tls.key-store.jks.alias=my-alias quarkus.tls.key-store.jks.alias-password=my-alias-password",
"quarkus.tls.key-store.sni=true # Disabled by default",
"The name of the credential bucket in the credentials provider quarkus.tls.key-store.credentials-provider.name=my-credentials The name of the bean providing the credential provider (optional, using the default credential provider if not set) quarkus.tls.key-store.credentials-provider.bean-name=my-credentials-provider The key used to retrieve the keystore password, `password` by default quarkus.tls.key-store.credentials-provider.password-key=password The key used to retrieve the alias password, `alias-password` by default quarkus.tls.key-store.credentials-provider.alias-password-key=alias-password",
"quarkus.tls.trust-store.pem.certs=ca.crt,ca2.pem",
"quarkus.tls.trust-store.p12.path=client-truststore.p12 quarkus.tls.trust-store.p12.password=password quarkus.tls.trust-store.p12.alias=my-alias",
"quarkus.tls.trust-store.jks.path=client-truststore.jks quarkus.tls.trust-store.jks.password=password quarkus.tls.trust-store.jks.alias=my-alias",
"The name of the credential bucket in the credentials provider quarkus.tls.trust-store.credentials-provider.name=my-credentials The name of the bean providing the credential provider (optional, using the default credential provider if not set) quarkus.tls.trust-store.credentials-provider.bean-name=my-credentials-provider The key used to retrieve the truststore password, `password` by default quarkus.tls.trust-store.credentials-provider.password-key=password",
"quarkus.tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384",
"quarkus.tls.protocols=TLSv1.3",
"quarkus.tls.handshake-timeout=10S # Default.",
"quarkus.tls.alpn=false",
"quarkus.tls.certificate-revocation-list=ca.crl, ca2.crl",
"quarkus.tls.trust-all=true",
"quarkus.tls.hostname-verification-algorithm=NONE",
"@Inject TlsConfigurationRegistry certificates; // TlsConfiguration def = certificates.getDefault().orElseThrow(); TlsConfiguration named = certificates.get(\"name\").orElseThrow(); //",
"TlsCertificateBuildItem item = new TlsCertificateBuildItem(\"named\", new MyCertificateSupplier());",
"public class MyCertificateSupplier implements Supplier<TlsConfiguration> { @Override public TlsConfiguration get() { try { KeyStore ks = KeyStore.getInstance(\"PKCS12\"); ks.load(getClass().getResourceAsStream(\"target/certs/test-registration-keystore.p12\"), \"password\".toCharArray()); KeyStore ts = KeyStore.getInstance(\"PKCS12\"); ts.load(getClass().getResourceAsStream(\"target/certs/test-registration-truststore.p12\"), \"password\".toCharArray()); return new BaseTlsConfiguration() { @Override public KeyStore getKeyStore() { return ks; } @Override public KeyStore getTrustStore() { return ts; } }; } catch (Exception e) { throw new RuntimeException(e); } } }",
"// in the class that performs the reload @Inject Event<CertificateUpdatedEvent> event; @Inject TlsConfigurationRegistry registry; public void reload() { TlsConfiguration config = registry.get(\"name\").orElseThrow(); if (config.reload()) { event.fire(new CertificateUpdatedEvent(\"name\", config)); } } // In the server or client code public void onReload(@Observes CertificateUpdatedEvent reload) { if (\"name\".equals(event.getName())) { server.updateSSLOptions(reload.tlsConfiguration().getSSLOptions()); // Or update the SSLContext. } }",
"quarkus.tls.reload-period=1h quarkus.tls.key-store.pem.0.cert=tls.crt quarkus.tls.key-store.pem.0.key=tls.key",
"quarkus.tls.http.reload-period=30min quarkus.tls.http.key-store.pem.0.cert=tls.crt quarkus.tls.http.key-store.pem.0.key=tls.key",
"apiVersion: v1 kind: Service metadata: labels: app.kubernetes.io/name: app.kubernetes.io/version: app.kubernetes.io/managed-by: quarkus name: hero-service spec: ports: - name: http port: 443 protocol: TCP targetPort: 8443 selector: app.kubernetes.io/name: app.kubernetes.io/version: type: ClusterIP",
"annotate service hero-service service.beta.openshift.io/serving-cert-secret-name=my-tls-secret",
"apiVersion: apps/v1 kind: Deployment metadata: labels: app.kubernetes.io/name: app.kubernetes.io/version: name: my-service spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: app.kubernetes.io/version: template: metadata: labels: app.kubernetes.io/name: app.kubernetes.io/version: spec: volumes: - name: my-tls-secret 1 secret: secretName: my-tls-secret containers: - env: - name: KUBERNETES_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: QUARKUS_TLS_KEY_STORE_PEM_ACME_CERT 2 value: /etc/tls/tls.crt - name: QUARKUS_TLS_KEY_STORE_PEM_ACME_KEY value: /etc/tls/tls.key image: imagePullPolicy: Always name: my-service volumeMounts: 3 - name: my-tls-secret mountPath: /etc/tls readOnly: true ports: - containerPort: 8443 4 name: https protocol: TCP",
"apiVersion: v1 kind: ConfigMap metadata: name: client-tls-config annotations: service.beta.openshift.io/inject-cabundle: \"true\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-service-client labels: app.kubernetes.io/name: app.kubernetes.io/version: spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: app.kubernetes.io/version: template: metadata: labels: app.kubernetes.io/name: app.kubernetes.io/version: spec: containers: - name: my-service-client image: ports: - name: http containerPort: 8080 protocol: TCP volumeMounts: 1 - name: my-client-volume mountPath: /deployments/tls volumes: 2 - name: my-client-volume configMap: name: client-tls-config",
"package org.acme; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient(baseUri = \"https://hero-service.cescoffi-dev.svc\", configKey = \"hero\") 1 public interface HeroClient { record Hero (Long id, String name, String otherName, int level, String picture, String powers) { } @GET @Path(\"/api/heroes/random\") Hero getRandomHero(); }",
"quarkus.rest-client.hero.tls-configuration-name=my-service-tls 1 quarkus.tls.my-service-tls.trust-store.pem.certs=/deployments/tls/service-ca.crt 2",
"quarkus.tls.reload-period=24h",
"> quarkus tls Install and Manage TLS development certificates Usage: tls [COMMAND] Commands: generate-quarkus-ca Generate Quarkus Dev CA certificate and private key. 1 generate-certificate Generate a TLS certificate with the Quarkus Dev CA if available. 2",
"quarkus tls generate-ca-certificate --install \\ 1 --renew \\ 2 --truststore 3",
"quarkus tls generate-certificate --name my-cert",
"Usage: tls generate-certificate [-hrV] [-c=<cn>] [-d=<directory>] -n=<name> [-p=<password>] Generate a TLS certificate with the Quarkus Dev CA if available. -c, --cn=<cn> The common name of the certificate. Default is 'localhost' -d, --directory=<directory> The directory in which the certificates will be created. Default is `.certs` -n, --name=<name> Name of the certificate. It will be used as file name and alias in the keystore -p, --password=<password> The password of the keystore. Default is 'password' -r, --renew Whether existing certificates will need to be replaced",
"./mvnw quarkus:dev INFO [io.quarkus] (Quarkus Main Thread) demo 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.286s. Listening on: http://localhost:8080 and https://localhost:8443",
"curl https://localhost:8443/hello Hello from Quarkus REST%",
"quarkus tls generate-certificate --name my-cert --self-signed",
"First, we need to identify the serial number of the CA certificate > certutil -store -user Root root \"Trusted Root Certification Authorities\" ================ Certificate 0 ================ Serial Number: 019036d564c8 Issuer: O=Quarkus, CN=quarkus-dev-root-ca # <-That's the CA, copy the Serial Number (the line above) NotBefore: 6/19/2024 11:07 AM NotAfter: 6/20/2025 11:07 AM Subject: C=Cloud, S=world, L=home, OU=Quarkus Dev, O=Quarkus Dev, CN=quarkus-dev-root-ca Signature matches Public Key Non-root Certificate uses same Public Key as Issuer Cert Hash(sha1): 3679bc95b613a2112a3d3256fe8321b6eccce720 No key provider information Cannot find the certificate and private key for decryption. CertUtil: -store command completed successfully.",
"> certutil -delstore -user -v Root USDSerial_Number",
"sudo rm /etc/pki/ca-trust/source/anchors/quarkus-dev-root-ca.pem sudo update-ca-trust",
"sudo rm /usr/local/share/ca-certificates/quarkus-dev-root-ca.pem sudo update-ca-certificates",
"sudo security -v remove-trusted-cert -d /Users/clement/.quarkus/quarkus-dev-root-ca.pem"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/management_of_security_keys_and_certificates_with_the_tls_registry/tls-registry-reference
|
Chapter 12. Build configuration resources
|
Chapter 12. Build configuration resources Use the following procedure to configure build settings. 12.1. Build controller configuration parameters The build.config.openshift.io/cluster resource offers the following configuration parameters. Parameter Description Build Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . spec : Holds user-settable values for the build controller configuration. buildDefaults Controls the default information for builds. defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. You can override values by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the BuildConfig strategy. gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . Values that are not set here are inherited from DefaultProxy. env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build. imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . resources : Defines resource requirements to execute the build. ImageLabel name : Defines the name of the label. It must have non-zero length. buildOverrides Controls override settings for builds. imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. nodeSelector : A selector which must be true for the build pod to fit on a node. tolerations : A list of tolerations that overrides any existing tolerations set on a build pod. BuildList items : Standard object's metadata. 12.2. Configuring build settings You can configure build settings by editing the build.config.openshift.io/cluster resource. Procedure Edit the build.config.openshift.io/cluster resource by entering the following command: $ oc edit build.config.openshift.io/cluster The following is an example build.config.openshift.io/cluster resource: apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-05-17T13:44:26Z" generation: 2 name: cluster resourceVersion: "107233" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists 1 Build : Holds cluster-wide information on how to handle builds. The canonical, and only valid name is cluster . 2 buildDefaults : Controls the default information for builds. 3 defaultProxy : Contains the default proxy settings for all build operations, including image pull or push and source download. 4 env : A set of default environment variables that are applied to the build if the specified variables do not exist on the build.
5 gitProxy : Contains the proxy settings for Git operations only. If set, this overrides any proxy settings for all Git commands, such as git clone . 6 imageLabels : A list of labels that are applied to the resulting image. You can override a default label by providing a label with the same name in the BuildConfig . 7 resources : Defines resource requirements to execute the build. 8 buildOverrides : Controls override settings for builds. 9 imageLabels : A list of labels that are applied to the resulting image. If you provided a label in the BuildConfig with the same name as one in this table, your label will be overwritten. 10 nodeSelector : A selector which must be true for the build pod to fit on a node. 11 tolerations : A list of tolerations that overrides any existing tolerations set on a build pod.
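For example, an individual build can override the cluster-wide default proxy described in the buildDefaults parameters above by setting the proxy environment variables in its own strategy definition. The following sketch uses illustrative names, repository, builder image, and proxy URLs:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-app                              # illustrative name
spec:
  source:
    git:
      uri: https://example.com/my-app.git   # illustrative repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest                 # illustrative builder image
      env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:3128
      - name: HTTPS_PROXY
        value: http://proxy.example.com:3128
      - name: NO_PROXY
        value: internal.example.com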
|
[
"oc edit build.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/build-configuration
|
11.3. SPICE Agent
|
11.3. SPICE Agent The SPICE agent helps run graphical applications such as virt-manager more smoothly, by helping integrate the guest operating system with the SPICE client. For example, when resizing a window in virt-manager , the SPICE agent allows for automatic X session resolution adjustment to the client resolution. The SPICE agent also provides support for copy and paste between the host and guest, and prevents mouse cursor lag. For system-specific information on the SPICE agent's capabilities, see the spice-vdagent package's README file. 11.3.1. Setting up Communication between the SPICE Agent and Host The SPICE agent can be configured on a running or shut down virtual machine. If configured on a running guest, the guest will start using the guest agent immediately. If the guest is shut down, the SPICE agent will be enabled at boot. Either virsh or virt-manager can be used to configure communication between the guest and the SPICE agent. The following instructions describe how to configure the SPICE agent on a Linux guest. Procedure 11.4. Setting up communication between guest agent and host with virsh on a Linux guest Shut down the virtual machine Ensure the virtual machine (named rhel7 in this example) is shut down before configuring the SPICE agent: Add the SPICE agent channel to the guest XML configuration Edit the guest's XML file to add the SPICE agent details: Add the following to the guest's XML file and save the changes: <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> Start the virtual machine Install the SPICE agent on the guest Install the SPICE agent if not yet installed in the guest virtual machine: Start the SPICE agent in the guest Start the SPICE agent service in the guest: Alternatively, the SPICE agent can be configured on a running guest with the following steps: Procedure 11.5. Setting up communication between SPICE agent and host on a running Linux guest Create an XML file for the SPICE agent # cat agent.xml <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> Attach the SPICE agent to the virtual machine Attach the SPICE agent to the running virtual machine (named rhel7 in this example) with this command: Install the SPICE agent on the guest Install the SPICE agent if not yet installed in the guest virtual machine: Start the SPICE agent in the guest Start the SPICE agent service in the guest: Procedure 11.6. Setting up communication between the SPICE agent and host with virt-manager Shut down the virtual machine Ensure the virtual machine is shut down before configuring the SPICE agent. To shut down the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click the light switch icon from the menu bar. Add the SPICE agent channel to the guest Open the virtual machine's hardware details by clicking the lightbulb icon at the top of the guest window. Click the Add Hardware button to open the Add New Virtual Hardware window, and select Channel . Select the SPICE agent from the Name drop-down list, edit the channel address, and click Finish : Figure 11.2. Selecting the SPICE agent channel device Start the virtual machine To start the virtual machine, select it from the list of virtual machines in Virtual Machine Manager , then click on the menu bar. 
Install the SPICE agent on the guest Open the guest with virt-manager and install the SPICE agent if not yet installed in the guest virtual machine: Start the SPICE agent in the guest Start the SPICE agent service in the guest: The SPICE agent is now configured on the rhel7 virtual machine.
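For reference, the virsh-based procedure above condenses to the following sequence, assuming a guest named rhel7 as in the examples:
virsh shutdown rhel7
virsh edit rhel7                 # add the <channel type='spicevmc'> element shown above, then save
virsh start rhel7
yum install spice-vdagent        # run inside the guest
systemctl start spice-vdagent    # run inside the guest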
|
[
"virsh shutdown rhel7",
"virsh edit rhel7",
"<channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel>",
"virsh start rhel7",
"yum install spice-vdagent",
"systemctl start spice-vdagent",
"cat agent.xml <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel>",
"virsh attach-device rhel7 agent.xml",
"yum install spice-vdagent",
"systemctl start spice-vdagent",
"yum install spice-vdagent",
"systemctl start spice-vdagent"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-spice_agent
|
Chapter 1. Configuring Jenkins images
|
Chapter 1. Configuring Jenkins images OpenShift Container Platform provides a container image for running Jenkins. This image provides a Jenkins server instance, which can be used to set up a basic flow for continuous testing, integration, and delivery. The image is based on the Red Hat Universal Base Images (UBI). OpenShift Container Platform follows the LTS release of Jenkins. OpenShift Container Platform provides an image that contains Jenkins 2.x. The OpenShift Container Platform Jenkins images are available on Quay.io or registry.redhat.io . For example: USD podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag> To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform container image registry. Additionally, you can create an image stream that points to the image, either in your container image registry or at the external location. Your OpenShift Container Platform resources can then reference the image stream. But for convenience, OpenShift Container Platform provides image streams in the openshift namespace for the core Jenkins image as well as the example Agent images provided for OpenShift Container Platform integration with Jenkins. 1.1. Configuration and customization You can manage Jenkins authentication in two ways: OpenShift Container Platform OAuth authentication provided by the OpenShift Container Platform Login plugin. Standard authentication provided by Jenkins. 1.1.1. OpenShift Container Platform OAuth authentication OAuth authentication is activated by configuring options on the Configure Global Security panel in the Jenkins UI, or by setting the OPENSHIFT_ENABLE_OAUTH environment variable on the Jenkins Deployment configuration to anything other than false . This activates the OpenShift Container Platform Login plugin, which retrieves the configuration information from pod data or by interacting with the OpenShift Container Platform API server. Valid credentials are controlled by the OpenShift Container Platform identity provider. Jenkins supports both browser and non-browser access. Valid users are automatically added to the Jenkins authorization matrix at log in, where OpenShift Container Platform roles dictate the specific Jenkins permissions that users have. The roles used by default are the predefined admin , edit , and view . The login plugin executes self-SAR requests against those roles in the project or namespace that Jenkins is running in. Users with the admin role have the traditional Jenkins administrative user permissions. Users with the edit or view role have progressively fewer permissions. The default OpenShift Container Platform admin , edit , and view roles and the Jenkins permissions those roles are assigned in the Jenkins instance are configurable. When running Jenkins in an OpenShift Container Platform pod, the login plugin looks for a config map named openshift-jenkins-login-plugin-config in the namespace that Jenkins is running in. If this plugin finds and can read in that config map, you can define the role to Jenkins Permission mappings. Specifically: The login plugin treats the key and value pairs in the config map as Jenkins permission to OpenShift Container Platform role mappings. The key is the Jenkins permission group short ID and the Jenkins permission short ID, with those two separated by a hyphen character. If you want to add the Overall Jenkins Administer permission to an OpenShift Container Platform role, the key should be Overall-Administer . 
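For illustration only, a minimal openshift-jenkins-login-plugin-config config map that grants the Overall Jenkins Administer permission to the default admin role might look like the following sketch; create it in the namespace that Jenkins runs in, and treat the role list as an example rather than a requirement:
kind: ConfigMap
apiVersion: v1
metadata:
  name: openshift-jenkins-login-plugin-config
data:
  # key = <permission group short ID>-<permission short ID>, value = comma-separated OpenShift roles
  Overall-Administer: admin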
To get a sense of which permission groups and permission IDs are available, go to the matrix authorization page in the Jenkins console and examine the IDs for the groups and individual permissions in the table they provide. The value of the key and value pair is the list of OpenShift Container Platform roles the permission should apply to, with each role separated by a comma. If you want to add the Overall Jenkins Administer permission to both the default admin and edit roles, as well as a new Jenkins role you have created, the value for the key Overall-Administer would be admin,edit,jenkins . Note The admin user that is pre-populated in the OpenShift Container Platform Jenkins image with administrative privileges is not given those privileges when OpenShift Container Platform OAuth is used. To grant these permissions, the OpenShift Container Platform cluster administrator must explicitly define that user in the OpenShift Container Platform identity provider and assign the admin role to the user. Jenkins users' permissions that are stored can be changed after the users are initially established. The OpenShift Container Platform Login plugin polls the OpenShift Container Platform API server for permissions and updates the permissions stored in Jenkins for each user with the permissions retrieved from OpenShift Container Platform. If the Jenkins UI is used to update permissions for a Jenkins user, the permission changes are overwritten the next time the plugin polls OpenShift Container Platform. You can control how often the polling occurs with the OPENSHIFT_PERMISSIONS_POLL_INTERVAL environment variable. The default polling interval is five minutes. The easiest way to create a new Jenkins service using OAuth authentication is to use a template. 1.1.2. Jenkins authentication Jenkins authentication is used by default if the image is run directly, without using a template. The first time Jenkins starts, the configuration is created along with the administrator user and password. The default user credentials are admin and password . Configure the default password by setting the JENKINS_PASSWORD environment variable when using, and only when using, standard Jenkins authentication. Procedure Create a Jenkins application that uses standard Jenkins authentication by entering the following command: USD oc new-app -e \ JENKINS_PASSWORD=<password> \ ocp-tools-4/jenkins-rhel8 1.2. Jenkins environment variables The Jenkins server can be configured with the following environment variables: Variable Definition Example values and settings OPENSHIFT_ENABLE_OAUTH Determines whether the OpenShift Container Platform Login plugin manages authentication when logging in to Jenkins. To enable, set to true . Default: false JENKINS_PASSWORD The password for the admin user when using standard Jenkins authentication. Not applicable when OPENSHIFT_ENABLE_OAUTH is set to true . Default: password JAVA_MAX_HEAP_PARAM , CONTAINER_HEAP_PERCENT , JENKINS_MAX_HEAP_UPPER_BOUND_MB These values control the maximum heap size of the Jenkins JVM. If JAVA_MAX_HEAP_PARAM is set, its value takes precedence. Otherwise, the maximum heap size is dynamically calculated as CONTAINER_HEAP_PERCENT of the container memory limit, optionally capped at JENKINS_MAX_HEAP_UPPER_BOUND_MB MiB. By default, the maximum heap size of the Jenkins JVM is set to 50% of the container memory limit with no cap. 
JAVA_MAX_HEAP_PARAM example setting: -Xmx512m CONTAINER_HEAP_PERCENT default: 0.5 , or 50% JENKINS_MAX_HEAP_UPPER_BOUND_MB example setting: 512 MiB JAVA_INITIAL_HEAP_PARAM , CONTAINER_INITIAL_PERCENT These values control the initial heap size of the Jenkins JVM. If JAVA_INITIAL_HEAP_PARAM is set, its value takes precedence. Otherwise, the initial heap size is dynamically calculated as CONTAINER_INITIAL_PERCENT of the dynamically calculated maximum heap size. By default, the JVM sets the initial heap size. JAVA_INITIAL_HEAP_PARAM example setting: -Xms32m CONTAINER_INITIAL_PERCENT example setting: 0.1 , or 10% CONTAINER_CORE_LIMIT If set, specifies an integer number of cores used for sizing numbers of internal JVM threads. Example setting: 2 JAVA_TOOL_OPTIONS Specifies options to apply to all JVMs running in this container. It is not recommended to override this value. Default: -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true JAVA_GC_OPTS Specifies Jenkins JVM garbage collection parameters. It is not recommended to override this value. Default: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 JENKINS_JAVA_OVERRIDES Specifies additional options for the Jenkins JVM. These options are appended to all other options, including the Java options above, and may be used to override any of them if necessary. Separate each additional option with a space; if any option contains space characters, escape them with a backslash. Example settings: -Dfoo -Dbar ; -Dfoo=first\ value -Dbar=second\ value . JENKINS_OPTS Specifies arguments to Jenkins. INSTALL_PLUGINS Specifies additional Jenkins plugins to install when the container is first run or when OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS is set to true . Plugins are specified as a comma-delimited list of name:version pairs. Example setting: git:3.7.0,subversion:2.10.2 . OPENSHIFT_PERMISSIONS_POLL_INTERVAL Specifies the interval in milliseconds that the OpenShift Container Platform Login plugin polls OpenShift Container Platform for the permissions that are associated with each user that is defined in Jenkins. Default: 300000 - 5 minutes OVERRIDE_PV_CONFIG_WITH_IMAGE_CONFIG When running this image with an OpenShift Container Platform persistent volume (PV) for the Jenkins configuration directory, the transfer of configuration from the image to the PV is performed only the first time the image starts because the PV is assigned when the persistent volume claim (PVC) is created. If you create a custom image that extends this image and updates the configuration in the custom image after the initial startup, the configuration is not copied over unless you set this environment variable to true . Default: false OVERRIDE_PV_PLUGINS_WITH_IMAGE_PLUGINS When running this image with an OpenShift Container Platform PV for the Jenkins configuration directory, the transfer of plugins from the image to the PV is performed only the first time the image starts because the PV is assigned when the PVC is created. If you create a custom image that extends this image and updates plugins in the custom image after the initial startup, the plugins are not copied over unless you set this environment variable to true . Default: false ENABLE_FATAL_ERROR_LOG_FILE When running this image with an OpenShift Container Platform PVC for the Jenkins configuration directory, this environment variable allows the fatal error log file to persist when a fatal error occurs. 
The fatal error file is saved at /var/lib/jenkins/logs . Default: false AGENT_BASE_IMAGE Setting this value overrides the image used for the jnlp container in the sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the jenkins-agent-base-rhel8:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest JAVA_BUILDER_IMAGE Setting this value overrides the image used for the java-builder container in the java-builder sample Kubernetes plugin pod templates provided with this image. Otherwise, the image from the java:latest image stream tag in the openshift namespace is used. Default: image-registry.openshift-image-registry.svc:5000/openshift/java:latest JAVA_FIPS_OPTIONS Setting this value controls how the JVM operates when running on a FIPS node. For more information, see Configure Red Hat build of OpenJDK 11 in FIPS mode . Default: -Dcom.redhat.fips=false 1.3. Providing Jenkins cross project access If you are going to run Jenkins somewhere other than your project, you must provide an access token to Jenkins to access your project. Procedure Identify the secret for the service account that has appropriate permissions to access the project that Jenkins must access by entering the following command: USD oc describe serviceaccount jenkins Example output Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp In this case, the secret is named jenkins-token-uyswp . Retrieve the token from the secret by entering the following command: USD oc describe secret <secret name from above> Example output Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA The token parameter contains the token value Jenkins requires to access the project. 1.4. Jenkins cross volume mount points The Jenkins image can be run with mounted volumes to enable persistent storage for the configuration: /var/lib/jenkins is the data directory where Jenkins stores configuration files, including job definitions. 1.5. Customizing the Jenkins image through source-to-image To customize the official OpenShift Container Platform Jenkins image, you can use the image as a source-to-image (S2I) builder. You can use S2I to copy your custom Jenkins job definitions, add additional plugins, or replace the provided config.xml file with your own custom configuration. To include your modifications in the Jenkins image, you must have a Git repository with the following directory structure: plugins This directory contains the binary Jenkins plugins you want to copy into Jenkins. plugins.txt This file lists the plugins you want to install using the following syntax: configuration/jobs This directory contains the Jenkins job definitions. configuration/config.xml This file contains your custom Jenkins configuration. The contents of the configuration/ directory are copied to the /var/lib/jenkins/ directory, so you can also include additional files, such as credentials.xml , there. 
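As a sketch of the layout described above, a plugins.txt might contain entries such as the following; the plugin names and versions mirror the INSTALL_PLUGINS example earlier in this chapter and are illustrative:
git:3.7.0
subversion:2.10.2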
Sample build configuration to customize the Jenkins image in OpenShift Container Platform apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest 1 The source parameter defines the source Git repository with the layout described above. 2 The strategy parameter defines the original Jenkins image to use as a source image for the build. 3 The output parameter defines the resulting, customized Jenkins image that you can use in deployment configurations instead of the official Jenkins image. 1.6. Configuring the Jenkins Kubernetes plugin The OpenShift Jenkins image includes the preinstalled Kubernetes plugin for Jenkins so that Jenkins agents can be dynamically provisioned on multiple container hosts using Kubernetes and OpenShift Container Platform. To use the Kubernetes plugin, OpenShift Container Platform provides an OpenShift Agent Base image that is suitable for use as a Jenkins agent. Important OpenShift Container Platform 4.11 moves the OpenShift Jenkins and OpenShift Agent Base images to the ocp-tools-4 repository at registry.redhat.io so that Red Hat can produce and update the images outside the OpenShift Container Platform lifecycle. Previously, these images were in the OpenShift Container Platform install payload and the openshift4 repository at registry.redhat.io . The OpenShift Jenkins Maven and NodeJS Agent images were removed from the OpenShift Container Platform 4.11 payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. The Maven and Node.js agent images are automatically configured as Kubernetes pod template images within the OpenShift Container Platform Jenkins image configuration for the Kubernetes plugin. That configuration includes labels for each image that you can apply to any of your Jenkins jobs under their Restrict where this project can be run setting. If the label is applied, jobs run under an OpenShift Container Platform pod running the respective agent image. Important In OpenShift Container Platform 4.10 and later, the recommended pattern for running Jenkins agents using the Kubernetes plugin is to use pod templates with both jnlp and sidecar containers. The jnlp container uses the OpenShift Container Platform Jenkins Base agent image to facilitate launching a separate pod for your build. The sidecar container image has the tools needed to build in a particular language within the separate pod that was launched. Many container images from the Red Hat Container Catalog are referenced in the sample image streams in the openshift namespace. The OpenShift Container Platform Jenkins image has a pod template named java-build with sidecar containers that demonstrate this approach. This pod template uses the latest Java version provided by the java image stream in the openshift namespace. The Jenkins image also provides auto-discovery and auto-configuration of additional agent images for the Kubernetes plugin. 
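As a minimal sketch of the auto-discovery mechanism described in the next paragraphs, an agent image stream in the project that Jenkins runs in could be exposed to the sync plugin like this (my-agent is a placeholder image stream name):
oc label imagestream my-agent role=jenkins-agent        # makes the sync plugin generate a pod template for this image
oc annotate imagestream my-agent agent-label=my-agent   # optional: controls the label field of the generated pod template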
With the OpenShift Container Platform sync plugin, on Jenkins startup, the Jenkins image searches within the project it is running, or the projects listed in the plugin's configuration, for the following items: Image streams with the role label set to jenkins-agent . Image stream tags with the role annotation set to jenkins-agent . Config maps with the role label set to jenkins-agent . When the Jenkins image finds an image stream with the appropriate label, or an image stream tag with the appropriate annotation, it generates the corresponding Kubernetes plugin configuration. This way, you can assign your Jenkins jobs to run in a pod running the container image provided by the image stream. The name and image references of the image stream, or image stream tag, are mapped to the name and image fields in the Kubernetes plugin pod template. You can control the label field of the Kubernetes plugin pod template by setting an annotation on the image stream, or image stream tag object, with the key agent-label . Otherwise, the name is used as the label. Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. When it finds a config map with the appropriate label, the Jenkins image assumes that any values in the key-value data payload of the config map contain Extensible Markup Language (XML) consistent with the configuration format for Jenkins and the Kubernetes plugin pod templates. One key advantage of config maps over image streams and image stream tags is that you can control all the Kubernetes plugin pod template parameters. Sample config map for jenkins-agent kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> The following example shows two containers that reference image streams in the openshift namespace. One container handles the JNLP contract for launching Pods as Jenkins Agents. 
The other container uses an image with tools for building code in a particular coding language: kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\USD(JENKINS_SECRET) \USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate> Note Do not log in to the Jenkins console and change the pod template configuration. If you do so after the pod template is created, and the OpenShift Container Platform Sync plugin detects that the image associated with the image stream or image stream tag has changed, it replaces the pod template and overwrites those configuration changes. You cannot merge a new configuration with the existing configuration. Consider the config map approach if you have more complex configuration needs. After it is installed, the OpenShift Container Platform Sync plugin monitors the API server of OpenShift Container Platform for updates to image streams, image stream tags, and config maps and adjusts the configuration of the Kubernetes plugin. The following rules apply: Removing the label or annotation from the config map, image stream, or image stream tag deletes any existing PodTemplate from the configuration of the Kubernetes plugin. If those objects are removed, the corresponding configuration is removed from the Kubernetes plugin. If you create appropriately labeled or annotated ConfigMap , ImageStream , or ImageStreamTag objects, or add labels after their initial creation, this results in the creation of a PodTemplate in the Kubernetes-plugin configuration. In the case of the PodTemplate by config map form, changes to the config map data for the PodTemplate are applied to the PodTemplate settings in the Kubernetes plugin configuration. The changes also override any changes that were made to the PodTemplate through the Jenkins UI between changes to the config map. To use a container image as a Jenkins agent, the image must run the agent as an entry point. 
For more details, see the official Jenkins documentation . Additional resources Important changes to OpenShift Jenkins images 1.7. Jenkins permissions If in the config map the <serviceAccount> element of the pod template XML is the OpenShift Container Platform service account used for the resulting pod, the service account credentials are mounted into the pod. The permissions are associated with the service account and control which operations against the OpenShift Container Platform master are allowed from the pod. Consider the following scenario with service accounts used for the pod, which is launched by the Kubernetes Plugin that runs in the OpenShift Container Platform Jenkins image. If you use the example template for Jenkins that is provided by OpenShift Container Platform, the jenkins service account is defined with the edit role for the project Jenkins runs in, and the master Jenkins pod has that service account mounted. The two default Maven and NodeJS pod templates that are injected into the Jenkins configuration are also set to use the same service account as the Jenkins master. Any pod templates that are automatically discovered by the OpenShift Container Platform sync plugin because their image streams or image stream tags have the required label or annotations are configured to use the Jenkins master service account as their service account. For the other ways you can provide a pod template definition into Jenkins and the Kubernetes plugin, you have to explicitly specify the service account to use. Those other ways include the Jenkins console, the podTemplate pipeline DSL that is provided by the Kubernetes plugin, or labeling a config map whose data is the XML configuration for a pod template. If you do not specify a value for the service account, the default service account is used. Ensure that whatever service account is used has the necessary permissions, roles, and so on defined within OpenShift Container Platform to manipulate whatever projects you choose to manipulate from within the pod. 1.8. Creating a Jenkins service from a template Templates provide parameter fields to define all the environment variables with predefined default values. OpenShift Container Platform provides templates to make creating a new Jenkins service easy. The Jenkins templates should be registered in the default openshift project by your cluster administrator during the initial cluster setup. The two available templates both define a deployment configuration and a service. The templates differ in their storage strategy, which affects whether the Jenkins content persists across a pod restart. Note A pod might be restarted when it is moved to another node or when an update of the deployment configuration triggers a redeployment. jenkins-ephemeral uses ephemeral storage. On pod restart, all data is lost. This template is only useful for development or testing. jenkins-persistent uses a Persistent Volume (PV) store. Data survives a pod restart. To use a PV store, the cluster administrator must define a PV pool in the OpenShift Container Platform deployment. After you select which template you want, you must instantiate the template to be able to use Jenkins. Procedure Create a new Jenkins application using one of the following methods: A PV: USD oc new-app jenkins-persistent Or an emptyDir type volume where configuration does not persist across pod restarts: USD oc new-app jenkins-ephemeral With both templates, you can run oc describe on them to see all the parameters available for overriding. 
For example: USD oc describe jenkins-ephemeral 1.9. Using the Jenkins Kubernetes plugin In the following example, the openshift-jee-sample BuildConfig object causes a Jenkins Maven agent pod to be dynamically provisioned. The pod clones some Java source code, builds a WAR file, and causes a second BuildConfig , openshift-jee-sample-docker to run. The second BuildConfig layers the new WAR file into a container image. Important OpenShift Container Platform 4.11 removed the OpenShift Jenkins Maven and NodeJS Agent images from its payload. Red Hat no longer produces these images, and they are not available from the ocp-tools-4 repository at registry.redhat.io . Red Hat maintains the 4.10 and earlier versions of these images for any significant bug fixes or security CVEs, following the OpenShift Container Platform lifecycle policy . For more information, see the "Important changes to OpenShift Jenkins images" link in the following "Additional resources" section. Sample BuildConfig that uses the Jenkins Kubernetes plugin kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node("maven") { sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } triggers: - type: ConfigChange It is also possible to override the specification of the dynamically created Jenkins agent pod. The following is a modification to the preceding example, which overrides the container memory and specifies an environment variable. Sample BuildConfig that uses the Jenkins Kubernetes plugin, specifying memory limit and environment variable kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: "mypod", 1 cloud: "openshift", 2 inheritFrom: "maven", 3 containers: [ containerTemplate(name: "jnlp", 4 image: "openshift/jenkins-agent-maven-35-centos7:v3.10", 5 resourceRequestMemory: "512Mi", 6 resourceLimitMemory: "512Mi", 7 envVars: [ envVar(key: "CONTAINER_HEAP_PERCENT", value: "0.25") 8 ]) ]) { node("mypod") { 9 sh "git clone https://github.com/openshift/openshift-jee-sample.git ." sh "mvn -B -Popenshift package" sh "oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war" } } triggers: - type: ConfigChange 1 A new pod template called mypod is defined dynamically. The new pod template name is referenced in the node stanza. 2 The cloud value must be set to openshift . 3 The new pod template can inherit its configuration from an existing pod template. In this case, inherited from the Maven pod template that is pre-defined by OpenShift Container Platform. 4 This example overrides values in the pre-existing container, and must be specified by name. All Jenkins agent images shipped with OpenShift Container Platform use the Container name jnlp . 
5 Specify the container image name again. This is a known issue. 6 A memory request of 512 Mi is specified. 7 A memory limit of 512 Mi is specified. 8 An environment variable CONTAINER_HEAP_PERCENT , with value 0.25 , is specified. 9 The node stanza references the name of the defined pod template. By default, the pod is deleted when the build completes. This behavior can be modified with the plugin or within a pipeline Jenkinsfile. Upstream Jenkins has more recently introduced a YAML declarative format for defining a podTemplate pipeline DSL in-line with your pipelines. An example of this format, using the sample java-builder pod template that is defined in the OpenShift Container Platform Jenkins image: def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml """ apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\USD(JENKINS_SECRET)', '\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true """ } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container("java") { sh "mvn --version" } } } } } Additional resources Important changes to OpenShift Jenkins images 1.10. Jenkins memory requirements When deployed by the provided Jenkins Ephemeral or Jenkins Persistent templates, the default memory limit is 1 Gi . By default, all other processes that run in the Jenkins container cannot use more than a total of 512 MiB of memory. If they require more memory, the container halts. It is therefore highly recommended that pipelines run external commands in an agent container wherever possible. If project quotas allow for it, see the recommendations from the Jenkins documentation on what a Jenkins master should have from a memory perspective. Those recommendations advise allocating even more memory for the Jenkins master. It is recommended to specify memory request and limit values on agent containers created by the Jenkins Kubernetes plugin. Admin users can set default values on a per-agent image basis through the Jenkins configuration. The memory request and limit parameters can also be overridden on a per-container basis. You can increase the amount of memory available to Jenkins by overriding the MEMORY_LIMIT parameter when instantiating the Jenkins Ephemeral or Jenkins Persistent template. 1.11. Additional resources See Base image options for more information about the Red Hat Universal Base Images (UBI). Important changes to OpenShift Jenkins images
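For example, a sketch of instantiating the persistent template with a larger memory limit; the value shown is illustrative:
oc new-app jenkins-persistent -p MEMORY_LIMIT=2Gi   # override the template's default 1Gi limit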
|
[
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/jenkins/images-other-jenkins
|
Chapter 6. Configuring Redis storage for rate limiting
|
Chapter 6. Configuring Redis storage for rate limiting To configure persistence for rate limit counters in a multicluster environment, you must configure the connection details for your shared Redis-based datastore. This datastore is used to persist shared rate limit counters for the Limitador component of Connectivity Link. Note You must configure connection details for your shared Redis-based datastore on each OpenShift cluster that you want to use Connectivity Link for rate limiting. Prerequisites See Chapter 1, Connectivity Link prerequisites and permissions . Procedure Set the following environment variable to your shared Redis-based instance URL: Ensure that you include the appropriate URI scheme for your environment: Secure Redis: rediss:// Standard Redis: redis:// Create a Secret resource for your Redis URL as follows: Update your Limitador custom resource to use the secret that you created as follows: Additional resources For details on how to set up your shared Redis-based datastore, see your Redis-compatible product documentation: Redis documentation . AWS ElastiCache (Redis OSS) User Guide . Dragonfly documentation .
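Putting the procedure together, a minimal sketch might look as follows, assuming the oc client and the kuadrant-system namespace used in the commands for this chapter; the Redis URL is illustrative:
export REDIS_URL=rediss://user:[email protected]:10340
# oc is assumed here; substitute your cluster CLI if different
oc -n kuadrant-system create secret generic redis-config --from-literal=URL=$REDIS_URL
oc patch limitador limitador --type=merge -n kuadrant-system -p '
spec:
  storage:
    redis:
      configSecretRef:
        name: redis-config
'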
|
[
"export REDIS_URL=rediss://user:[email protected]:10340",
"-n kuadrant-system create secret generic redis-config --from-literal=URL=USDREDIS_URL",
"patch limitador limitador --type=merge -n kuadrant-system -p ' spec: storage: redis: configSecretRef: name: redis-config '"
] |
https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/installing_connectivity_link_on_openshift/configure-redis_connectivity-link
|
Chapter 5. Managing user-owned OAuth access tokens
|
Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted 5.4. 
Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>-unauth.yaml
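For example, a binding for the system:oauth-token-deleter role, produced by substituting one of the roles listed above into the template; the binding name is illustrative:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  # example name; choose any name that fits your conventions
  name: oauth-token-deleter-access-unauthenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:oauth-token-deleter
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated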
|
[
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/managing-oauth-access-tokens
|
Chapter 2. Maintaining VDO
|
Chapter 2. Maintaining VDO After deploying a VDO volume, you can perform certain tasks to maintain or optimize it. Some of the following tasks are required for the correct functioning of VDO volumes. Prerequisites VDO is installed and deployed. See Chapter 1, Deploying VDO . 2.1. Managing free space on VDO volumes VDO is a thinly provisioned block storage target. Because of that, you must actively monitor and manage space usage on VDO volumes. 2.1.1. The physical and logical size of a VDO volume VDO utilizes physical, available physical, and logical size in the following ways: Physical size This is the same size as the underlying block device. VDO uses this storage for: User data, which might be deduplicated and compressed VDO metadata, such as the UDS index Available physical size This is the portion of the physical size that VDO is able to use for user data. It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size. Logical Size This is the provisioned size that the VDO volume presents to applications. It is usually larger than the available physical size. If the --vdoLogicalSize option is not specified, the logical volume is provisioned at a 1:1 ratio. For example, if a VDO volume is put on top of a 20 GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO volume. VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute maximum logical size of 4PB. Figure 2.1. VDO disk organization In this figure, the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device. Additional resources For more information about how much storage VDO metadata requires on block devices of different sizes, see Section 1.6.4, "Examples of VDO requirements by physical size" . 2.1.2. Thin provisioning in VDO VDO is a thinly provisioned block storage target. The amount of physical space that a VDO volume uses might differ from the size of the volume that is presented to users of the storage. You can make use of this disparity to save on storage costs. Out-of-space conditions Take care to avoid unexpectedly running out of storage space if the data written does not achieve the expected rate of optimization. Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that reason, storage systems using VDO must provide you with a way of monitoring the size of the free pool on the VDO volume. You can determine the size of this free pool by using the vdostats utility. The default output of this utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example: When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following: Note These warning messages appear only when the lvm2-monitor service is running. It is enabled by default. 
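For instance, running vdostats in human-readable form might produce output along these lines; the device name and figures are purely illustrative:
vdostats --human-readable
Device                   Size      Used Available Use%  Space saving%
/dev/mapper/vdo0       529.8G    312.8G    217.0G  59%            56%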
How to prevent out-of-space conditions If the size of the free pool drops below a certain level, you can take action by: Deleting data. This reclaims space whenever the deleted data is not duplicated. Deleting data frees the space only after discards are issued. Adding physical storage Important Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume. Thin provisioning and the TRIM and DISCARD commands To benefit from the storage savings of thin provisioning, the physical storage layer needs to know when data is deleted. File systems that work with thinly provisioned storage send TRIM or DISCARD commands to inform the storage system when a logical block is no longer required. Several methods of sending the TRIM or DISCARD commands are available: With the discard mount option, the file systems can send these commands whenever a block is deleted. You can send the commands in a controlled manner by using utilities such as fstrim . These utilities tell the file system to detect which logical blocks are unused and send the information to the storage system in the form of a TRIM or DISCARD command. The need to use TRIM or DISCARD on unused blocks is not unique to VDO. Any thinly provisioned storage system has the same challenge. 2.1.3. Monitoring VDO This procedure describes how to obtain usage and efficiency information from a VDO volume. Prerequisites Install the VDO software. See Section 1.7, "Installing VDO" . Procedure Use the vdostats utility to get information about a VDO volume: Additional resources vdostats(8) man page on your system 2.1.4. Reclaiming space for VDO on file systems This procedure reclaims storage space on a VDO volume that hosts a file system. VDO cannot reclaim space unless file systems communicate that blocks are free using the DISCARD , TRIM , or UNMAP commands. Procedure If the file system on your VDO volume supports discard operations, enable them. See Chapter 5, Discarding unused blocks . For file systems that do not use DISCARD , TRIM , or UNMAP , you can manually reclaim free space. Store a file consisting of binary zeros to fill the free space and then delete that file. 2.1.5. Reclaiming space for VDO without a file system This procedure reclaims storage space on a VDO volume that is used as a block storage target without a file system. Procedure Use the blkdiscard utility. For example, a single VDO volume can be carved up into multiple subvolumes by deploying LVM on top of it. Before deprovisioning a logical volume, use the blkdiscard utility to free the space previously used by that logical volume. LVM supports the REQ_DISCARD command and forwards the requests to VDO at the appropriate logical block addresses in order to free the space. If you use other volume managers, they also need to support REQ_DISCARD , or equivalently, UNMAP for SCSI devices or TRIM for ATA devices. Additional resources blkdiscard(8) man page on your system 2.1.6. Reclaiming space for VDO on Fibre Channel or Ethernet network This procedure reclaims storage space on VDO volumes (or portions of volumes) that are provisioned to hosts on a Fibre Channel storage fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. Procedure SCSI initiators can use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target framework needs to be configured to advertise support for this command. 
This is typically done by enabling thin provisioning on these volumes. Verify support for UNMAP on Linux-based SCSI initiators by running the following command: In the output, verify that the Maximum unmap LBA count value is greater than zero. 2.2. Starting or stopping VDO volumes You can start or stop a given VDO volume, or all VDO volumes, and their associated UDS indexes. 2.2.1. Started and activated VDO volumes During the system boot, the vdo systemd unit automatically starts all VDO devices that are configured as activated . The vdo systemd unit is installed and enabled by default when the vdo package is installed. This unit automatically runs the vdo start --all command at system startup to bring up all activated VDO volumes. You can also create a VDO volume that does not start automatically by adding the --activate=disabled option to the vdo create command. The starting order Some systems might place LVM volumes both above VDO volumes and below them. On these systems, it is necessary to start services in the right order: The lower layer of LVM must start first. In most systems, starting this layer is configured automatically when the LVM package is installed. The vdo systemd unit must start then. Finally, additional scripts must run in order to start LVM volumes or other services on top of the running VDO volumes. How long it takes to stop a volume Stopping a VDO volume takes time based on the speed of your storage device and the amount of data that the volume needs to write: The volume always writes around 1GiB for every 1GiB of the UDS index. The volume additionally writes the amount of data equal to the block map cache size plus up to 8MiB per slab. The volume must finish processing all outstanding IO requests. 2.2.2. Starting a VDO volume This procedure starts a given VDO volume or all VDO volumes on your system. Procedure To start a given VDO volume, use: To start all VDO volumes, use: Additional resources vdo(8) man page on your system 2.2.3. Stopping a VDO volume This procedure stops a given VDO volume or all VDO volumes on your system. Procedure Stop the volume. To stop a given VDO volume, use: To stop all VDO volumes, use: Wait for the volume to finish writing data to the disk. Additional resources vdo(8) man page on your system 2.2.4. Additional resources If restarted after an unclean shutdown, VDO performs a rebuild to verify the consistency of its metadata and repairs it if necessary. See Section 2.5, "Recovering a VDO volume after an unclean shutdown" for more information about the rebuild process. 2.3. Automatically starting VDO volumes at system boot You can configure VDO volumes so that they start automatically at system boot. You can also disable the automatic start. 2.3.1. Started and activated VDO volumes During the system boot, the vdo systemd unit automatically starts all VDO devices that are configured as activated . The vdo systemd unit is installed and enabled by default when the vdo package is installed. This unit automatically runs the vdo start --all command at system startup to bring up all activated VDO volumes. You can also create a VDO volume that does not start automatically by adding the --activate=disabled option to the vdo create command. The starting order Some systems might place LVM volumes both above VDO volumes and below them. On these systems, it is necessary to start services in the right order: The lower layer of LVM must start first. In most systems, starting this layer is configured automatically when the LVM package is installed. 
The vdo systemd unit must start then. Finally, additional scripts must run in order to start LVM volumes or other services on top of the running VDO volumes. How long it takes to stop a volume Stopping a VDO volume takes time based on the speed of your storage device and the amount of data that the volume needs to write: The volume always writes around 1GiB for every 1GiB of the UDS index. The volume additionally writes the amount of data equal to the block map cache size plus up to 8MiB per slab. The volume must finish processing all outstanding IO requests. 2.3.2. Activating a VDO volume This procedure activates a VDO volume to enable it to start automatically. Procedure To activate a specific volume: To activate all volumes: Additional resources vdo(8) man page on your system 2.3.3. Deactivating a VDO volume This procedure deactivates a VDO volume to prevent it from starting automatically. Procedure To deactivate a specific volume: To deactivate all volumes: Additional resources vdo(8) man page on your system 2.4. Selecting a VDO write mode You can configure write mode for a VDO volume, based on what the underlying block device requires. By default, VDO selects write mode automatically. 2.4.1. VDO write modes VDO supports the following write modes: sync When VDO is in sync mode, the layers above it assume that a write command writes data to persistent storage. As a result, it is not necessary for the file system or application, for example, to issue FLUSH or force unit access (FUA) requests to cause the data to become persistent at critical points. VDO must be set to sync mode only when the underlying storage guarantees that data is written to persistent storage when the write command completes. That is, the storage must either have no volatile write cache, or have a write through cache. async When VDO is in async mode, VDO does not guarantee that the data is written to persistent storage when a write command is acknowledged. The file system or application must issue FLUSH or FUA requests to ensure data persistence at critical points in each transaction. VDO must be set to async mode if the underlying storage does not guarantee that data is written to persistent storage when the write command completes; that is, when the storage has a volatile write back cache. async-unsafe This mode has the same properties as async but it is not compliant with Atomicity, Consistency, Isolation, Durability (ACID). Compared to async , async-unsafe has a better performance. Warning When an application or a file system that assumes ACID compliance operates on top of the VDO volume, async-unsafe mode might cause unexpected data loss. auto The auto mode automatically selects sync or async based on the characteristics of each device. This is the default option. 2.4.2. The internal processing of VDO write modes The write modes for VDO are sync and async . The following information describes the operations of these modes. If the kvdo module is operating in synchronous ( synch ) mode: It temporarily writes the data in the request to the allocated block and then acknowledges the request. Once the acknowledgment is complete, an attempt is made to deduplicate the block by computing a MurmurHash-3 signature of the block data, which is sent to the VDO index. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated block and does a byte-by-byte comparison of the two blocks to verify that they are identical. 
If they are indeed identical, then kvdo updates its block map so that the logical block points to the corresponding physical block and releases the allocated physical block. If the VDO index did not contain an entry for the signature of the block being written, or the indicated block does not actually contain the same data, kvdo updates its block map to make the temporary physical block permanent. If kvdo is operating in asynchronous ( async ) mode: Instead of writing the data, it will immediately acknowledge the request. It will then attempt to deduplicate the block in same manner as described above. If the block turns out to be a duplicate, kvdo updates its block map and releases the allocated block. Otherwise, it writes the data in the request to the allocated block and updates the block map to make the physical block permanent. 2.4.3. Checking the write mode on a VDO volume This procedure lists the active write mode on a selected VDO volume. Procedure Use the following command to see the write mode used by a VDO volume: The output lists: The configured write policy , which is the option selected from sync , async , or auto The write policy , which is the particular write mode that VDO applied, that is either sync or async 2.4.4. Checking for a volatile cache This procedure determines if a block device has a volatile cache or not. You can use the information to choose between the sync and async VDO write modes. Procedure Use either of the following methods to determine if a device has a writeback cache: Read the /sys/block/ block-device /device/scsi_disk/ identifier /cache_type sysfs file. For example: Alternatively, you can find whether the above mentioned devices have a write cache or not in the kernel boot log: In the examples: Device sda indicates that it has a writeback cache. Use async mode for it. Device sdb indicates that it does not have a writeback cache. Use sync mode for it. You should configure VDO to use the sync write mode if the cache_type value is None or write through . 2.4.5. Setting a VDO write mode This procedure sets a write mode for a VDO volume, either for an existing one or when creating a new volume. Important Using an incorrect write mode might result in data loss after a power failure, a system crash, or any unexpected loss of contact with the disk. Prerequisites Determine which write mode is correct for your device. See Section 2.4.4, "Checking for a volatile cache" . Procedure You can set a write mode either on an existing VDO volume or when creating a new volume: To modify an existing VDO volume, use: To specify a write mode when creating a VDO volume, add the --writePolicy= sync|async|async-unsafe|auto option to the vdo create command. 2.5. Recovering a VDO volume after an unclean shutdown You can recover a VDO volume after an unclean shutdown to enable it to continue operating. The task is mostly automated. Additionally, you can clean up after a VDO volume was unsuccessfully created because of a failure in the process. 2.5.1. VDO write modes VDO supports the following write modes: sync When VDO is in sync mode, the layers above it assume that a write command writes data to persistent storage. As a result, it is not necessary for the file system or application, for example, to issue FLUSH or force unit access (FUA) requests to cause the data to become persistent at critical points. VDO must be set to sync mode only when the underlying storage guarantees that data is written to persistent storage when the write command completes. 
That is, the storage must either have no volatile write cache, or have a write through cache. async When VDO is in async mode, VDO does not guarantee that the data is written to persistent storage when a write command is acknowledged. The file system or application must issue FLUSH or FUA requests to ensure data persistence at critical points in each transaction. VDO must be set to async mode if the underlying storage does not guarantee that data is written to persistent storage when the write command completes; that is, when the storage has a volatile write back cache. async-unsafe This mode has the same properties as async but it is not compliant with Atomicity, Consistency, Isolation, Durability (ACID). Compared to async, async-unsafe has better performance. Warning When an application or a file system that assumes ACID compliance operates on top of the VDO volume, async-unsafe mode might cause unexpected data loss. auto The auto mode automatically selects sync or async based on the characteristics of each device. This is the default option. 2.5.2. VDO volume recovery When a VDO volume restarts after an unclean shutdown, VDO performs the following actions: Verifies the consistency of the metadata on the volume. Rebuilds a portion of the metadata to repair it if necessary. Rebuilds are automatic and do not require user intervention. VDO might rebuild different writes depending on the active write mode: sync If VDO was running on synchronous storage and the write policy was set to sync, all data written to the volume is fully recovered. async If the write policy was async, some writes might not be recovered if they were not made durable. Writes are made durable by sending VDO a FLUSH command or a write I/O tagged with the FUA (force unit access) flag. You can accomplish this from user mode by invoking a data integrity operation like fsync, fdatasync, sync, or umount. In either mode, some writes that were either unacknowledged or not followed by a flush might also be rebuilt. Automatic and manual recovery When a VDO volume enters recovering operating mode, VDO automatically rebuilds the unclean VDO volume after it comes back online. This is called online recovery. If VDO cannot recover a VDO volume successfully, it places the volume in read-only operating mode that persists across volume restarts. You need to fix the problem manually by forcing a rebuild. Additional resources For more information about automatic and manual recovery and VDO operating modes, see Section 2.5.3, "VDO operating modes". 2.5.3. VDO operating modes This section describes the modes that indicate whether a VDO volume is operating normally or is recovering from an error. You can display the current operating mode of a VDO volume using the vdostats --verbose device command. See the Operating mode attribute in the output. normal This is the default operating mode. VDO volumes are always in normal mode, unless either of the following states forces a different mode. A newly created VDO volume starts in normal mode. recovering When a VDO volume does not save all of its metadata before shutting down, it automatically enters recovering mode the next time that it starts up. The typical reasons for entering this mode are sudden power loss or a problem from the underlying storage device. In recovering mode, VDO is fixing the reference counts for each physical block of data on the device. Recovery usually does not take very long. 
The time depends on how large the VDO volume is, how fast the underlying storage device is, and how many other requests VDO is handling simultaneously. The VDO volume functions normally with the following exceptions: Initially, the amount of space available for write requests on the volume might be limited. As more of the metadata is recovered, more free space becomes available. Data written while the VDO volume is recovering might fail to deduplicate against data written before the crash if that data is in a portion of the volume that has not yet been recovered. VDO can compress data while recovering the volume. You can still read or overwrite compressed blocks. During an online recovery, certain statistics are unavailable: for example, blocks in use and blocks free. These statistics become available when the rebuild is complete. Response times for reads and writes might be slower than usual due to the ongoing recovery work. You can safely shut down the VDO volume in recovering mode. If the recovery does not finish before shutting down, the device enters recovering mode again the next time that it starts up. The VDO volume automatically exits recovering mode and moves to normal mode when it has fixed all the reference counts. No administrator action is necessary. For details, see Section 2.5.4, "Recovering a VDO volume online". read-only When a VDO volume encounters a fatal internal error, it enters read-only mode. Events that might cause read-only mode include metadata corruption or the backing storage device becoming read-only. This mode is an error state. In read-only mode, data reads work normally but data writes always fail. The VDO volume stays in read-only mode until an administrator fixes the problem. You can safely shut down a VDO volume in read-only mode. The mode usually persists after the VDO volume is restarted. In rare cases, the VDO volume is not able to record the read-only state to the backing storage device. In these cases, VDO attempts to do a recovery instead. Once a volume is in read-only mode, there is no guarantee that data on the volume has not been lost or corrupted. In such cases, Red Hat recommends copying the data out of the read-only volume and possibly restoring the volume from backup. If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume metadata so the volume can be brought back online and made available. The integrity of the rebuilt data cannot be guaranteed. For details, see Section 2.5.5, "Forcing an offline rebuild of a VDO volume metadata". 2.5.4. Recovering a VDO volume online This procedure performs an online recovery on a VDO volume to recover metadata after an unclean shutdown. Procedure If the VDO volume is not already started, start it: No additional steps are necessary. The recovery runs in the background. If you rely on volume statistics like blocks in use and blocks free, wait until they are available. 2.5.5. Forcing an offline rebuild of a VDO volume metadata This procedure performs a forced offline rebuild of a VDO volume metadata to recover after an unclean shutdown. Warning This procedure might cause data loss on the volume. Prerequisites The VDO volume is started. Procedure Check if the volume is in read-only mode. See the operating mode attribute in the command output: If the volume is not in read-only mode, it is not necessary to force an offline rebuild. Perform an online recovery as described in Section 2.5.4, "Recovering a VDO volume online". A minimal example of checking the mode is shown after this step. 
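For example, a check of the operating mode might look like the following; the volume name is a placeholder and the exact formatting of the vdo status output can vary between versions:

vdo status --name=my-vdo | grep -i "operating mode"    # expect read-only before forcing a rebuild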
Stop the volume if it is running: Restart the volume with the --forceRebuild option: 2.5.6. Removing an unsuccessfully created VDO volume This procedure cleans up a VDO volume in an intermediate state. A volume is left in an intermediate state if a failure occurs when creating the volume. This might happen when, for example: The system crashes Power fails The administrator interrupts a running vdo create command Procedure To clean up, remove the unsuccessfully created volume with the --force option: The --force option is required because the administrator might have caused a conflict by changing the system configuration since the volume was unsuccessfully created. Without the --force option, the vdo remove command fails with the following message: 2.6. Optimizing the UDS index You can configure certain settings of the UDS index to optimize it on your system. Important You cannot change the properties of the UDS index after creating the VDO volume. 2.6.1. Components of a VDO volume VDO uses a block device as a backing store, which can include an aggregation of physical storage consisting of one or more disks, partitions, or even flat files. When a storage management tool creates a VDO volume, VDO reserves volume space for the UDS index and VDO volume. The UDS index and the VDO volume interact together to provide deduplicated block storage. Figure 2.2. VDO disk organization The VDO solution consists of the following components: kvdo A kernel module that loads into the Linux Device Mapper layer provides a deduplicated, compressed, and thinly provisioned block storage volume. The kvdo module exposes a block device. You can access this block device directly for block storage or present it through a Linux file system, such as XFS or ext4. When kvdo receives a request to read a logical block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions is true, kvdo updates its block map and acknowledges the request. Otherwise, VDO processes and optimizes the data. uds A kernel module that communicates with the Universal Deduplication Service (UDS) index on the volume and analyzes data for duplicates. For each new piece of data, UDS quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module. Command line tools For configuring and managing optimized storage. 2.6.2. The UDS index VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they are being stored. The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly determines if that piece is identical to any previously stored piece of data. If the index finds match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module. The deduplication window is the number of previously written blocks that the index remembers. The size of the deduplication window is configurable. 
For a given window size, the index requires a specific amount of RAM and a specific amount of disk space. The size of the window is usually determined by specifying the size of the index memory using the --indexMem=size option. VDO then determines the amount of disk space to use automatically. The UDS index consists of two parts: A compact representation is used in memory that contains at most one entry per unique block. An on-disk component that records the associated block names presented to the index as they occur, in order. UDS uses an average of 4 bytes per entry in memory, including cache. The on-disk component maintains a bounded history of data passed to UDS. UDS provides deduplication advice for data that falls within this deduplication window, containing the names of the most recently seen blocks. The deduplication window allows UDS to index data as efficiently as possible while limiting the amount of memory required to index large data repositories. Despite the bounded nature of the deduplication window, most datasets which have high levels of deduplication also exhibit a high degree of temporal locality - in other words, most deduplication occurs among sets of blocks that were written at about the same time. Furthermore, in general, data being written is more likely to duplicate data that was recently written than data that was written a long time ago. Therefore, for a given workload over a given time interval, deduplication rates will often be the same whether UDS indexes only the most recent data or all the data. Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced storage costs from deduplication. Index size requirements are more closely related to the rate of data ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among the data written within the last month. 2.6.3. Recommended UDS index configuration This section describes the recommended options to use with the UDS index, based on your intended use case. In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an extremely efficient indexing data structure, requiring approximately one-tenth of a byte of RAM per block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block. The minimum configuration of this index uses 256 MB of RAM and approximately 25 GB of space on disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate for deduplicating storage pools that are up to 10 TB in size. The default configuration of the index, however, is to use a dense index. This index is considerably less efficient (by a factor of 10) in RAM, but it has much lower (also by a factor of 10) minimum required disk space, making it more convenient for evaluation in constrained environments. In general, a deduplication window that is one quarter of the physical size of a VDO volume is a recommended configuration. However, this is not an actual requirement. 
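For reference, a new volume that uses the recommended sparse configuration described above might be created as follows; the volume name and the backing device are placeholders, not defaults:

vdo create --name=my-vdo \
           --device=/dev/sdX \
           --sparseIndex=enabled \
           --indexMem=0.25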
Even small deduplication windows (compared to the amount of physical storage) can find significant amounts of duplicate data in many use cases. Larger windows may also be used, but it in most cases, there will be little additional benefit to doing so. Additional resources Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning this important system parameter. 2.7. Enabling or disabling deduplication in VDO In some instances, you might want to temporarily disable deduplication of data being written to a VDO volume while still retaining the ability to read to and write from the volume. Disabling deduplication prevents subsequent writes from being deduplicated, but the data that was already deduplicated remains so. 2.7.1. Deduplication in VDO Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple copies of duplicate blocks. Instead of writing the same data more than once, VDO detects each duplicate block and records it as a reference to the original block. VDO maintains a mapping from logical block addresses, which are used by the storage layer above VDO, to physical block addresses, which are used by the storage layer under VDO. After deduplication, multiple logical block addresses can be mapped to the same physical block address. These are called shared blocks. Block sharing is invisible to users of the storage, who read and write blocks as they would if VDO were not present. When a shared block is overwritten, VDO allocates a new physical block for storing the new block data to ensure that other logical block addresses that are mapped to the shared physical block are not modified. 2.7.2. Enabling deduplication on a VDO volume This procedure restarts the associated UDS index and informs the VDO volume that deduplication is active again. Note Deduplication is enabled by default. Procedure To restart deduplication on a VDO volume, use the following command: 2.7.3. Disabling deduplication on a VDO volume This procedure stops the associated UDS index and informs the VDO volume that deduplication is no longer active. Procedure To stop deduplication on a VDO volume, use the following command: You can also disable deduplication when creating a new VDO volume by adding the --deduplication=disabled option to the vdo create command. 2.8. Enabling or disabling compression in VDO VDO provides data compression. Disabling it can maximize performance and speed up processing of data that is unlikely to compress. Re-enabling it can increase space savings. 2.8.1. Compression in VDO In addition to block-level deduplication, VDO also provides inline block-level compression using the HIOPS CompressionTM technology. VDO volume compression is on by default. While deduplication is the optimal solution for virtual machine environments and backup applications, compression works very well with structured and unstructured file formats that do not typically exhibit block-level redundancy, such as log files and databases. Compression operates on blocks that have not been identified as duplicates. When VDO sees unique data for the first time, it compresses the data. Subsequent copies of data that have already been stored are deduplicated without requiring an additional compression step. The compression feature is based on a parallelized packaging algorithm that enables it to handle many compression operations at once. 
After first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks that, when compressed, can fit into a single physical block. After it is determined that a particular physical block is unlikely to hold additional compressed blocks, it is written to storage and the uncompressed blocks are freed and reused. By performing the compression and packaging operations after having already responded to the requestor, using compression imposes a minimal latency penalty. 2.8.2. Enabling compression on a VDO volume This procedure enables compression on a VDO volume to increase space savings. Note Compression is enabled by default. Procedure To start it again, use the following command: 2.8.3. Disabling compression on a VDO volume This procedure stops compression on a VDO volume to maximize performance or to speed processing of data that is unlikely to compress. Procedure To stop compression on an existing VDO volume, use the following command: Alternatively, you can disable compression by adding the --compression=disabled option to the vdo create command when creating a new volume. 2.9. Increasing the size of a VDO volume You can increase the physical size of a VDO volume to utilize more underlying storage capacity, or the logical size to provide more capacity on the volume. 2.9.1. The physical and logical size of a VDO volume VDO utilizes physical, available physical, and logical size in the following ways: Physical size This is the same size as the underlying block device. VDO uses this storage for: User data, which might be deduplicated and compressed VDO metadata, such as the UDS index Available physical size This is the portion of the physical size that VDO is able to use for user data It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size. Logical Size This is the provisioned size that the VDO volume presents to applications. It is usually larger than the available physical size. If the --vdoLogicalSize option is not specified, then the provisioning of the logical volume is now provisioned to a 1:1 ratio. For example, if a VDO volume is put on top of a 20 GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO volume. VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute maximum logical size of 4PB. Figure 2.3. VDO disk organization In this figure, the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device. Additional resources For more information about how much storage VDO metadata requires on block devices of different sizes, see Section 1.6.4, "Examples of VDO requirements by physical size" . 2.9.2. Thin provisioning in VDO VDO is a thinly provisioned block storage target. The amount of physical space that a VDO volume uses might differ from the size of the volume that is presented to users of the storage. You can make use of this disparity to save on storage costs. Out-of-space conditions Take care to avoid unexpectedly running out of storage space, if the data written does not achieve the expected rate of optimization. 
Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that reason, storage systems using VDO must provide you with a way of monitoring the size of the free pool on the VDO volume. You can determine the size of this free pool by using the vdostats utility. The default output of this utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example: When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following: Note These warning messages appear only when the lvm2-monitor service is running. It is enabled by default. How to prevent out-of-space conditions If the size of free pool drops below a certain level, you can take action by: Deleting data. This reclaims space whenever the deleted data is not duplicated. Deleting data frees the space only after discards are issued. Adding physical storage Important Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume. Thin provisioning and the TRIM and DISCARD commands To benefit from the storage savings of thin provisioning, the physical storage layer needs to know when data is deleted. File systems that work with thinly provisioned storage send TRIM or DISCARD commands to inform the storage system when a logical block is no longer required. Several methods of sending the TRIM or DISCARD commands are available: With the discard mount option, the file systems can send these commands whenever a block is deleted. You can send the commands in a controlled manner by using utilities such as fstrim . These utilities tell the file system to detect which logical blocks are unused and send the information to the storage system in the form of a TRIM or DISCARD command. The need to use TRIM or DISCARD on unused blocks is not unique to VDO. Any thinly provisioned storage system has the same challenge. 2.9.3. Increasing the logical size of a VDO volume This procedure increases the logical size of a given VDO volume. It enables you to initially create VDO volumes that have a logical size small enough to be safe from running out of space. After some period of time, you can evaluate the actual rate of data reduction, and if sufficient, you can grow the logical size of the VDO volume to take advantage of the space savings. It is not possible to decrease the logical size of a VDO volume. Procedure To grow the logical size, use: When the logical size increases, VDO informs any devices or file systems on top of the volume of the new size. 2.9.4. Increasing the physical size of a VDO volume This procedure increases the amount of physical storage available to a VDO volume. It is not possible to shrink a VDO volume in this way. Prerequisites The underlying block device has a larger capacity than the current physical size of the VDO volume. If it does not, you can attempt to increase the size of the device. The exact procedure depends on the type of the device. For example, to resize an MBR or GPT partition, see the Resizing a partition section in the Managing storage devices guide. Procedure Add the new physical storage space to the VDO volume: 2.10. Removing VDO volumes You can remove an existing VDO volume on your system. 2.10.1. 
Removing a working VDO volume This procedure removes a VDO volume and its associated UDS index. Procedure Unmount the file systems and stop the applications that are using the storage on the VDO volume. To remove the VDO volume from your system, use: 2.10.2. Removing an unsuccessfully created VDO volume This procedure cleans up a VDO volume in an intermediate state. A volume is left in an intermediate state if a failure occurs when creating the volume. This might happen when, for example: The system crashes Power fails The administrator interrupts a running vdo create command Procedure To clean up, remove the unsuccessfully created volume with the --force option: The --force option is required because the administrator might have caused a conflict by changing the system configuration since the volume was unsuccessfully created. Without the --force option, the vdo remove command fails with the following message: 2.11. Additional resources You can use the Ansible tool to automate VDO deployment and administration. For details, see: Ansible documentation: https://docs.ansible.com/ VDO Ansible module documentation: https://docs.ansible.com/ansible/latest/modules/vdo_module.html
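As a recap of Section 2.9, a typical grow operation after enlarging the backing device might look like the following sketch; the volume name and the new logical size are placeholders, not recommendations:

vdo growPhysical --name=my-vdo
vdo growLogical --name=my-vdo --vdoLogicalSize=30T
vdostats --human-readable /dev/mapper/my-vdo    # confirm the new capacity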
|
[
"Device 1K-blocks Used Available Use% /dev/mapper/vdo-name 211812352 105906176 105906176 50%",
"Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool vdo-name. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool vdo-name is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool vdo-name is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool vdo-name is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool vdo-name is now 96.07% full.",
"vdostats --human-readable Device 1K-blocks Used Available Use% Space saving% /dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73% /dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%",
"sg_vpd --page=0xb0 /dev/device",
"vdo start --name=my-vdo",
"vdo start --all",
"vdo stop --name=my-vdo",
"vdo stop --all",
"vdo activate --name=my-vdo",
"vdo activate --all",
"vdo deactivate --name=my-vdo",
"vdo deactivate --all",
"vdo status --name=my-vdo",
"cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type' write back",
"cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type' None",
"sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, does not support DPO or FUA sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA",
"vdo changeWritePolicy --writePolicy=sync|async|async-unsafe|auto --name=vdo-name",
"vdo start --name=my-vdo",
"vdo status --name=my-vdo",
"vdo stop --name=my-vdo",
"vdo start --name=my-vdo --forceRebuild",
"vdo remove --force --name=my-vdo",
"[...] A previous operation failed. Recovery from the failure either failed or was interrupted. Add '--force' to 'remove' to perform the following cleanup. Steps to clean up VDO my-vdo: umount -f /dev/mapper/my-vdo udevadm settle dmsetup remove my-vdo vdo: ERROR - VDO volume my-vdo previous operation (create) is incomplete",
"vdo enableDeduplication --name=my-vdo",
"vdo disableDeduplication --name=my-vdo",
"vdo enableCompression --name=my-vdo",
"vdo disableCompression --name=my-vdo",
"Device 1K-blocks Used Available Use% /dev/mapper/vdo-name 211812352 105906176 105906176 50%",
"Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool vdo-name. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool vdo-name is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool vdo-name is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool vdo-name is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool vdo-name is now 96.07% full.",
"vdo growLogical --name=my-vdo --vdoLogicalSize=new-logical-size",
"vdo growPhysical --name=my-vdo",
"vdo remove --name=my-vdo",
"vdo remove --force --name=my-vdo",
"[...] A previous operation failed. Recovery from the failure either failed or was interrupted. Add '--force' to 'remove' to perform the following cleanup. Steps to clean up VDO my-vdo: umount -f /dev/mapper/my-vdo udevadm settle dmsetup remove my-vdo vdo: ERROR - VDO volume my-vdo previous operation (create) is incomplete"
]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_storage/maintaining-vdo_deduplicating-and-compressing-storage
|
Chapter 7. Uninstalling a cluster on Azure Stack Hub
|
Chapter 7. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory>, specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn, debug, or error instead of info. Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
|
[
"./openshift-install destroy cluster --dir <installation_directory> --log-level info"
]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub
|