Chapter 2. Setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements
Chapter 2. Setting up OpenShift Pipelines in the web console to view Software Supply Chain Security elements Use the Developer or Administrator perspective to create or modify a pipeline and view key Software Supply Chain Security elements within a project. Set up OpenShift Pipelines to view: Project vulnerabilities : Visual representation of identified vulnerabilities within a project. Software Bill of Materials (SBOMs) : Download or view detailed listing of PipelineRun components. Additionally, PipelineRuns that meet Tekton Chains requirement displays signed badges to their names. This badge indicates that the pipeline run execution results are cryptographically signed and stored securely, for example within an OCI image. Figure 2.1. The signed badge The PipelineRun displays the signed badge to its name only if you have configured Tekton Chains. For information on configuring Tekton Chains, see Using Tekton Chains for OpenShift Pipelines supply chain security . 2.1. Setting up OpenShift Pipelines to view project vulnerabilities The PipelineRun details page provides a visual representation of identified vulnerabilities, categorized by the severity (critical, high, medium, and low). This streamlined view facilitates prioritization and remediation efforts. Figure 2.2. Viewing vulnerabilities on the PipelineRun details page You can also review the vulnerabilities in the Vulnerabilities column in the pipeline run list view page. Figure 2.3. Viewing vulnerabilities on the PipelineRun list view Note Visual representation of identified vulnerabilities is available starting from the OpenShift Container Platform version 4.15 release. Prerequisites You have logged in to the web console . You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. You have an existing vulnerability scan task. Procedures In the Developer or Administrator perspective, switch to the relevant project where you want a visual representation of vulnerabilities. Update your existing vulnerability scan task to ensure that it stores the output in the .json file and then extracts the vulnerability summary in the following format: # The format to extract vulnerability summary (adjust the jq command for different JSON structures). jq -rce \ '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path) Note You might need to adjust the jq command for different JSON structures. 
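For reference, the jq filter shown above assumes scan output shaped like the following illustrative example. The file name scan_output.json and the severity counts are placeholders, and the keys under .result.summary vary by scanning tool, so adjust the filter to match the JSON that your scanner actually produces.

Example scan output consumed by the jq filter (illustrative)

{
  "result": {
    "summary": {
      "CRITICAL": 1,
      "IMPORTANT": 4,
      "MODERATE": 7,
      "LOW": 12
    }
  }
}

With this input, the filter writes the following compact summary to the $(results.SCAN_OUTPUT.path) result file, which populates the Vulnerabilities row and column in the PipelineRun views:

{"vulnerabilities":{"critical":1,"high":4,"medium":7,"low":12}}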
(Optional) If you do not have a vulnerability scan task, create one in the following format: Example vulnerability scan task using Roxctl apiVersion: tekton.dev/v1 kind: Task metadata: name: vulnerability-scan 1 annotations: task.output.location: results 2 task.results.format: application/json task.results.key: SCAN_OUTPUT 3 spec: results: - description: CVE result format 4 name: SCAN_OUTPUT steps: - name: roxctl 5 image: quay.io/roxctl-tool-image 6 env: - name: ENV_VAR_NAME_1 7 valueFrom: secretKeyRef: key: secret_key_1 name: secret_name_1 env: - name: ENV_VAR_NAME_2 valueFrom: secretKeyRef: key: secret_key_2 name: secret_name_2 script: | 8 #!/bin/sh # Sample shell script echo "ENV_VAR_NAME_1: " USDENV_VAR_NAME_1 echo "ENV_VAR_NAME_2: " USDENV_VAR_NAME_2 jq --version (adjust the jq command for different JSON structures) curl -k -L -H "Authorization: Bearer USDENV_VAR_NAME_1" https://USDENV_VAR_NAME_2/api/cli/download/roxctl-linux --output ./roxctl chmod +x ./roxctl echo "roxctl version" ./roxctl version echo "image from pipeline: " # Replace the following line with your dynamic image logic DYNAMIC_IMAGE=USD(get_dynamic_image_logic_here) echo "Dynamic image: USDDYNAMIC_IMAGE" ./roxctl image scan --insecure-skip-tls-verify -e USDENV_VAR_NAME_2 --image USDDYNAMIC_IMAGE --output json > roxctl_output.json more roxctl_output.json jq -rce \ 9 '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path) 1 The name of your task. 2 The location for storing the task outputs. 3 The naming convention of the scan task result. A valid naming convention must end with the SCAN_OUTPUT string. For example, SCAN_OUTPUT, MY_CUSTOM_SCAN_OUTPUT, or ACS_SCAN_OUTPUT. 4 The description of the result. 5 The name of the vulnerability scanning tool that you have used. 6 The location of the actual image containing the scan tool. 7 The tool-specific environment variables. 8 The shell script to be executed with json output. For example, scan_output.json. 9 The format to extract vulnerability summary (adjust jq command for different JSON structures). Note This is an example configuration. Modify the values according to your specific scanning tool to set results in the expected format. Update an appropriate Pipeline to add vulnerabilities specifications in the following format: ... spec: results: - description: The common vulnerabilities and exposures (CVE) result name: SCAN_OUTPUT value: USD(tasks.vulnerability-scan.results.SCAN_OUTPUT) Verification Navigate to the PipelineRun details page and review the Vulnerabilities row for a visual representation of identified vulnerabilities. Alternatively, you can navigate to the PipelineRun list view page, and review the Vulnerabilities column. 2.2. Setting up OpenShift Pipelines to download or view SBOMs The PipelineRun details page provides an option to download or view Software Bill of Materials (SBOMs), enhancing transparency and control within your supply chain. SBOMs lists all the software libraries that a component uses. Those libraries can enable specific functionality or facilitate development. You can use an SBOM to better understand the composition of your software, identify vulnerabilities, and assess the potential impact of any security issues that might arise. Figure 2.4. Options to download or view SBOMs Prerequisites You have logged in to the web console . 
You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer or Administrator perspective, switch to the relevant project where you want a visual representation of SBOMs. Add a task in the following format to view or download the SBOM information: Example SBOM task apiVersion: tekton.dev/v1 kind: Task metadata: name: sbom-task 1 annotations: task.output.location: results 2 task.results.format: application/text task.results.key: LINK_TO_SBOM 3 task.results.type: external-link 4 spec: results: - description: Contains the SBOM link 5 name: LINK_TO_SBOM steps: - name: print-sbom-results image: quay.io/image 6 script: | 7 #!/bin/sh syft version syft quay.io/<username>/quarkus-demo:v2 --output cyclonedx-json=sbom-image.json echo 'BEGIN SBOM' cat sbom-image.json echo 'END SBOM' echo 'quay.io/user/workloads/<namespace>/node-express/node-express:build-8e536-1692702836' | tee USD(results.LINK_TO_SBOM.path) 8 1 The name of your task. 2 The location for storing the task outputs. 3 The SBOM task result name. Do not change the name of the SBOM result task. 4 (Optional) Set to open the SBOM in a new tab. 5 The description of the result. 6 The image that generates the SBOM. 7 The script that generates the SBOM image. 8 The SBOM image along with the path name. Update the Pipeline to reference the newly created SBOM task. ... spec: tasks: - name: sbom-task taskRef: name: sbom-task 1 results: - name: IMAGE_URL 2 description: url value: <oci_image_registry_url> 3 1 The same name as created in Step 2. 2 The name of the result. 3 The OCI image repository URL which contains the .sbom images. Rerun the affected OpenShift Pipeline. 2.2.1. Viewing an SBOM in the web UI Prerequisites You have set up OpenShift Pipelines to download or view SBOMs. Procedure Navigate to the Activity PipelineRuns tab. For the project whose SBOM you want to view, select its most recent pipeline run. On the PipelineRun details page, select View SBOM . You can use your web browser to immediately search the SBOM for terms that indicate vulnerabilities in your software supply chain. For example, try searching for log4j . You can select Download to download the SBOM, or Expand to view it full-screen. 2.2.2. Downloading an SBOM in the CLI Prerequisites You have installed the Cosign CLI tool. For information about installing the Cosign tool, see the Sigstore documentation for Cosign . You have set up OpenShift Pipelines to download or view SBOMs. Procedure Open terminal, log in to Developer or Administrator perspective, and then switch to the relevant project. From the OpenShift web console, copy the download sbom command and run it on your terminal. Example cosign command USD cosign download sbom quay.io/<workspace>/user-workload@sha256 (Optional) To view the full SBOM in a searchable format, run the following command to redirect the output: Example cosign command USD cosign download sbom quay.io/<workspace>/user-workload@sha256 > sbom.txt 2.2.3. Reading the SBOM In the SBOM, as the following sample excerpt shows, you can see four characteristics of each library that a project uses: Its author or publisher Its name Its version Its licenses This information helps you verify that individual libraries are safely-sourced, updated, and compliant. Example SBOM { "bomFormat": "CycloneDX", "specVersion": "1.4", "serialNumber": "urn:uuid:89146fc4-342f-496b-9cc9-07a6a1554220", "version": 1, "metadata": { ... 
}, "components": [ { "bom-ref": "pkg:pypi/[email protected]?package-id=d6ad7ed5aac04a8", "type": "library", "author": "Armin Ronacher <[email protected]>", "name": "Flask", "version": "2.1.0", "licenses": [ { "license": { "id": "BSD-3-Clause" } } ], "cpe": "cpe:2.3:a:armin-ronacher:python-Flask:2.1.0:*:*:*:*:*:*:*", "purl": "pkg:pypi/[email protected]", "properties": [ { "name": "syft:package:foundBy", "value": "python-package-cataloger" ... 2.3. Additional resources Working with Red Hat OpenShift Pipelines in the web console
[ "The format to extract vulnerability summary (adjust the jq command for different JSON structures). jq -rce '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path)", "apiVersion: tekton.dev/v1 kind: Task metadata: name: vulnerability-scan 1 annotations: task.output.location: results 2 task.results.format: application/json task.results.key: SCAN_OUTPUT 3 spec: results: - description: CVE result format 4 name: SCAN_OUTPUT steps: - name: roxctl 5 image: quay.io/roxctl-tool-image 6 env: - name: ENV_VAR_NAME_1 7 valueFrom: secretKeyRef: key: secret_key_1 name: secret_name_1 env: - name: ENV_VAR_NAME_2 valueFrom: secretKeyRef: key: secret_key_2 name: secret_name_2 script: | 8 #!/bin/sh # Sample shell script echo \"ENV_VAR_NAME_1: \" USDENV_VAR_NAME_1 echo \"ENV_VAR_NAME_2: \" USDENV_VAR_NAME_2 jq --version (adjust the jq command for different JSON structures) curl -k -L -H \"Authorization: Bearer USDENV_VAR_NAME_1\" https://USDENV_VAR_NAME_2/api/cli/download/roxctl-linux --output ./roxctl chmod +x ./roxctl echo \"roxctl version\" ./roxctl version echo \"image from pipeline: \" # Replace the following line with your dynamic image logic DYNAMIC_IMAGE=USD(get_dynamic_image_logic_here) echo \"Dynamic image: USDDYNAMIC_IMAGE\" ./roxctl image scan --insecure-skip-tls-verify -e USDENV_VAR_NAME_2 --image USDDYNAMIC_IMAGE --output json > roxctl_output.json more roxctl_output.json jq -rce \\ 9 '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path)", "spec: results: - description: The common vulnerabilities and exposures (CVE) result name: SCAN_OUTPUT value: USD(tasks.vulnerability-scan.results.SCAN_OUTPUT)", "apiVersion: tekton.dev/v1 kind: Task metadata: name: sbom-task 1 annotations: task.output.location: results 2 task.results.format: application/text task.results.key: LINK_TO_SBOM 3 task.results.type: external-link 4 spec: results: - description: Contains the SBOM link 5 name: LINK_TO_SBOM steps: - name: print-sbom-results image: quay.io/image 6 script: | 7 #!/bin/sh syft version syft quay.io/<username>/quarkus-demo:v2 --output cyclonedx-json=sbom-image.json echo 'BEGIN SBOM' cat sbom-image.json echo 'END SBOM' echo 'quay.io/user/workloads/<namespace>/node-express/node-express:build-8e536-1692702836' | tee USD(results.LINK_TO_SBOM.path) 8", "spec: tasks: - name: sbom-task taskRef: name: sbom-task 1 results: - name: IMAGE_URL 2 description: url value: <oci_image_registry_url> 3", "cosign download sbom quay.io/<workspace>/user-workload@sha256", "cosign download sbom quay.io/<workspace>/user-workload@sha256 > sbom.txt", "{ \"bomFormat\": \"CycloneDX\", \"specVersion\": \"1.4\", \"serialNumber\": \"urn:uuid:89146fc4-342f-496b-9cc9-07a6a1554220\", \"version\": 1, \"metadata\": { }, \"components\": [ { \"bom-ref\": \"pkg:pypi/[email protected]?package-id=d6ad7ed5aac04a8\", \"type\": \"library\", \"author\": \"Armin Ronacher <[email protected]>\", \"name\": \"Flask\", \"version\": \"2.1.0\", \"licenses\": [ { \"license\": { \"id\": \"BSD-3-Clause\" } } ], \"cpe\": \"cpe:2.3:a:armin-ronacher:python-Flask:2.1.0:*:*:*:*:*:*:*\", \"purl\": \"pkg:pypi/[email protected]\", \"properties\": [ { \"name\": \"syft:package:foundBy\", \"value\": \"python-package-cataloger\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/securing_openshift_pipelines/setting-up-openshift-pipelines-to-view-software-supply-chain-security-elements
Chapter 16. Performing rolling upgrades for Data Grid Server clusters
Chapter 16. Performing rolling upgrades for Data Grid Server clusters Perform rolling upgrades of your Data Grid clusters to change between versions without downtime or data loss and migrate data over the Hot Rod protocol. 16.1. Setting up target Data Grid clusters Create a cluster that uses the Data Grid version to which you plan to upgrade and then connect the source cluster to the target cluster using a remote cache store. Prerequisites Install Data Grid Server nodes with the desired version for your target cluster. Important Ensure the network properties for the target cluster do not overlap with those for the source cluster. You should specify unique names for the target and source clusters in the JGroups transport configuration. Depending on your environment you can also use different network interfaces and port offsets to separate the target and source clusters. Procedure Create a remote cache store configuration, in JSON format, that allows the target cluster to connect to the source cluster. Remote cache stores on the target cluster use the Hot Rod protocol to retrieve data from the source cluster. { "remote-store": { "cache": "myCache", "shared": true, "raw-values": true, "security": { "authentication": { "digest": { "username": "username", "password": "changeme", "realm": "default" } } }, "remote-server": [ { "host": "127.0.0.1", "port": 12222 } ] } } Use the Data Grid Command Line Interface (CLI) or REST API to add the remote cache store configuration to the target cluster so it can connect to the source cluster. CLI: Use the migrate cluster connect command on the target cluster. REST API: Invoke a POST request that includes the remote store configuration in the payload with the rolling-upgrade/source-connection method. Repeat the preceding step for each cache that you want to migrate. Switch clients over to the target cluster, so it starts handling all requests. Update client configuration with the location of the target cluster. Restart clients. Important If you need to migrate Indexed caches you must first migrate the internal ___protobuf_metadata cache so that the .proto schemas defined on the source cluster will also be present on the target cluster. Additional resources Remote cache store configuration schema 16.2. Synchronizing data to target clusters When you set up a target Data Grid cluster and connect it to a source cluster, the target cluster can handle client requests using a remote cache store and load data on demand. To completely migrate data to the target cluster, so you can decommission the source cluster, you can synchronize data. This operation reads data from the source cluster and writes it to the target cluster. Data migrates to all nodes in the target cluster in parallel, with each node receiving a subset of the data. You must perform the synchronization for each cache that you want to migrate to the target cluster. Prerequisites Set up a target cluster with the appropriate Data Grid version. Procedure Start synchronizing each cache that you want to migrate to the target cluster with the Data Grid Command Line Interface (CLI) or REST API. CLI: Use the migrate cluster synchronize command. REST API: Use the ?action=sync-data parameter with a POST request. When the operation completes, Data Grid responds with the total number of entries copied to the target cluster. Disconnect each node in the target cluster from the source cluster. CLI: Use the migrate cluster disconnect command. REST API: Invoke a DELETE request. 
Next steps After you synchronize all data from the source cluster, the rolling upgrade process is complete. You can now decommission the source cluster.
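For reference, the REST interactions described in this chapter can be scripted with curl, as in the following sketch. The server address localhost:11222, the credentials, and the cache name myCache are placeholder assumptions, and the authentication options you pass to curl depend on how your Data Grid Server security realm is configured; the endpoints themselves are the rolling-upgrade endpoints referenced in the procedures above.

# Connect the target cluster to the source cluster.
# The payload is the remote store configuration created in Section 16.1 (placeholder file name).
curl -u admin:changeme -X POST -H "Content-Type: application/json" \
  --data @remote-store.json \
  http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection

# Synchronize all data for the cache from the source cluster.
curl -u admin:changeme -X POST \
  "http://localhost:11222/rest/v2/caches/myCache?action=sync-data"

# Disconnect the cache from the source cluster after synchronization completes.
curl -u admin:changeme -X DELETE \
  http://localhost:11222/rest/v2/caches/myCache/rolling-upgrade/source-connection

Repeat the connect and synchronize calls for each cache that you migrate, then disconnect each one before decommissioning the source cluster.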
[ "{ \"remote-store\": { \"cache\": \"myCache\", \"shared\": true, \"raw-values\": true, \"security\": { \"authentication\": { \"digest\": { \"username\": \"username\", \"password\": \"changeme\", \"realm\": \"default\" } } }, \"remote-server\": [ { \"host\": \"127.0.0.1\", \"port\": 12222 } ] } }", "[//containers/default]> migrate cluster connect -c myCache --file=remote-store.json", "POST /rest/v2/caches/myCache/rolling-upgrade/source-connection", "migrate cluster synchronize -c myCache", "POST /rest/v2/caches/myCache?action=sync-data", "migrate cluster disconnect -c myCache", "DELETE /rest/v2/caches/myCache/rolling-upgrade/source-connection" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/rolling-upgrades
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/making_open_source_more_inclusive
Chapter 17. Syncing LDAP groups
Chapter 17. Syncing LDAP groups As an administrator, you can use groups to manage users, change their permissions, and enhance collaboration. Your organization may have already created user groups and stored them in an LDAP server. OpenShift Container Platform can sync those LDAP records with internal OpenShift Container Platform records, enabling you to manage your groups in one place. OpenShift Container Platform currently supports group sync with LDAP servers using three common schemas for defining group membership: RFC 2307, Active Directory, and augmented Active Directory. For more information on configuring LDAP, see Configuring an LDAP identity provider . Note You must have cluster-admin privileges to sync groups. 17.1. About configuring LDAP sync Before you can run LDAP sync, you need a sync configuration file. This file contains the following LDAP client configuration details: Configuration for connecting to your LDAP server. Sync configuration options that are dependent on the schema used in your LDAP server. An administrator-defined list of name mappings that maps OpenShift Container Platform group names to groups in your LDAP server. The format of the configuration file depends upon the schema you are using: RFC 2307, Active Directory, or augmented Active Directory. LDAP client configuration The LDAP client configuration section of the configuration defines the connections to your LDAP server. The LDAP client configuration section of the configuration defines the connections to your LDAP server. LDAP client configuration url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5 1 The connection protocol, IP address of the LDAP server hosting your database, and the port to connect to, formatted as scheme://host:port . 2 Optional distinguished name (DN) to use as the Bind DN. OpenShift Container Platform uses this if elevated privilege is required to retrieve entries for the sync operation. 3 Optional password to use to bind. OpenShift Container Platform uses this if elevated privilege is necessary to retrieve entries for the sync operation. This value may also be provided in an environment variable, external file, or encrypted file. 4 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 5 The certificate bundle to use for validating server certificates for the configured URL. If empty, OpenShift Container Platform uses system-trusted roots. This only applies if insecure is set to false . LDAP query definition Sync configurations consist of LDAP query definitions for the entries that are required for synchronization. The specific definition of an LDAP query depends on the schema used to store membership information in the LDAP server. LDAP query definition baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6 1 The distinguished name (DN) of the branch of the directory where all searches will start from. It is required that you specify the top of your directory tree, but you can also specify a subtree in the directory. 2 The scope of the search. Valid values are base , one , or sub . If this is left undefined, then a scope of sub is assumed. Descriptions of the scope options can be found in the table below. 
3 The behavior of the search with respect to aliases in the LDAP tree. Valid values are never , search , base , or always . If this is left undefined, then the default is to always dereference aliases. Descriptions of the dereferencing behaviors can be found in the table below. 4 The time limit allowed for the search by the client, in seconds. A value of 0 imposes no client-side limit. 5 A valid LDAP search filter. If this is left undefined, then the default is (objectClass=*) . 6 The optional maximum size of response pages from the server, measured in LDAP entries. If set to 0 , no size restrictions will be made on pages of responses. Setting paging sizes is necessary when queries return more entries than the client or server allow by default. Table 17.1. LDAP search scope options LDAP search scope Description base Only consider the object specified by the base DN given for the query. one Consider all of the objects on the same level in the tree as the base DN for the query. sub Consider the entire subtree rooted at the base DN given for the query. Table 17.2. LDAP dereferencing behaviors Dereferencing behavior Description never Never dereference any aliases found in the LDAP tree. search Only dereference aliases found while searching. base Only dereference aliases while finding the base object. always Always dereference all aliases found in the LDAP tree. User-defined name mapping A user-defined name mapping explicitly maps the names of OpenShift Container Platform groups to unique identifiers that find groups on your LDAP server. The mapping uses normal YAML syntax. A user-defined mapping can contain an entry for every group in your LDAP server or only a subset of those groups. If there are groups on the LDAP server that do not have a user-defined name mapping, the default behavior during sync is to use the attribute specified as the OpenShift Container Platform group's name. User-defined name mapping groupUIDNameMapping: "cn=group1,ou=groups,dc=example,dc=com": firstgroup "cn=group2,ou=groups,dc=example,dc=com": secondgroup "cn=group3,ou=groups,dc=example,dc=com": thirdgroup 17.1.1. About the RFC 2307 configuration file The RFC 2307 schema requires you to provide an LDAP query definition for both user and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: Note If using user-defined name mappings, your configuration file will differ. LDAP sync configuration that uses RFC 2307 schema: rfc2307_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The IP address and host of the LDAP server where this group's record is stored. 
2 When false , secure LDAP ( ldaps:// ) URLs connect using TLS, and insecure LDAP ( ldap:// ) URLs are upgraded to TLS. When true , no TLS connection is made to the server and you cannot use ldaps:// URL schemes. 3 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute . For fine-grained filtering, use the whitelist / blacklist method. 4 The attribute to use as the name of the group. 5 The attribute on the group that stores the membership information. 6 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 7 The attribute to use as the name of the user in the OpenShift Container Platform group record. 17.1.2. About the Active Directory configuration file The Active Directory schema requires you to provide an LDAP query definition for user entries, as well as the attributes to represent them with in the internal OpenShift Container Platform group records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, but define the name of the group by the name of the group on the LDAP server. The following configuration file creates these relationships: LDAP sync configuration that uses Active Directory schema: active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2 1 The attribute to use as the name of the user in the OpenShift Container Platform group record. 2 The attribute on the user that stores the membership information. 17.1.3. About the augmented Active Directory configuration file The augmented Active Directory schema requires you to provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records. For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships. LDAP sync configuration that uses augmented Active Directory schema: augmented_active_directory_config.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4 1 The attribute that uniquely identifies a group on the LDAP server. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 The attribute to use as the name of the group. 
3 The attribute to use as the name of the user in the OpenShift Container Platform group record. 4 The attribute on the user that stores the membership information. 17.2. Running LDAP sync Once you have created a sync configuration file, you can begin to sync. OpenShift Container Platform allows administrators to perform a number of different sync types with the same server. 17.2.1. Syncing the LDAP server with OpenShift Container Platform You can sync all groups from the LDAP server with OpenShift Container Platform. Prerequisites Create a sync configuration file. Procedure To sync all groups from the LDAP server with OpenShift Container Platform: USD oc adm groups sync --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.2.2. Syncing OpenShift Container Platform groups with the LDAP server You can sync all groups already in OpenShift Container Platform that correspond to groups in the LDAP server specified in the configuration file. Prerequisites Create a sync configuration file. Procedure To sync OpenShift Container Platform groups with the LDAP server: USD oc adm groups sync --type=openshift --sync-config=config.yaml --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.2.3. Syncing subgroups from the LDAP server with OpenShift Container Platform You can sync a subset of LDAP groups with OpenShift Container Platform using whitelist files, blacklist files, or both. Note You can use any combination of blacklist files, whitelist files, or whitelist literals. Whitelist and blacklist files must contain one unique group identifier per line, and you can include whitelist literals directly in the command itself. These guidelines apply to groups found on LDAP servers as well as groups already present in OpenShift Container Platform. Prerequisites Create a sync configuration file. Procedure To sync a subset of LDAP groups with OpenShift Container Platform, use any the following commands: USD oc adm groups sync --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync <group_unique_identifier> \ --whitelist=<whitelist_file> \ --blacklist=<blacklist_file> \ --sync-config=config.yaml \ --confirm USD oc adm groups sync --type=openshift \ --whitelist=<whitelist_file> \ --sync-config=config.yaml \ --confirm Note By default, all group synchronization operations are dry-run, so you must set the --confirm flag on the oc adm groups sync command to make changes to OpenShift Container Platform group records. 17.3. Running a group pruning job An administrator can also choose to remove groups from OpenShift Container Platform records if the records on the LDAP server that created them are no longer present. The prune job will accept the same sync configuration file and whitelists or blacklists as used for the sync job. 
For example: USD oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm USD oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm 17.4. Automatically syncing LDAP groups You can automatically sync LDAP groups on a periodic basis by configuring a cron job. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have configured an LDAP identity provider (IDP). This procedure assumes that you created an LDAP secret named ldap-secret and a config map named ca-config-map . Procedure Create a project where the cron job will run: USD oc new-project ldap-sync 1 1 This procedure uses a project called ldap-sync . Locate the secret and config map that you created when configuring the LDAP identity provider and copy them to this new project. The secret and config map exist in the openshift-config project and must be copied to the new ldap-sync project. Define a service account: Example ldap-sync-service-account.yaml kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync Create the service account: USD oc create -f ldap-sync-service-account.yaml Define a cluster role: Example ldap-sync-cluster-role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - '' - user.openshift.io resources: - groups verbs: - get - list - create - update Create the cluster role: USD oc create -f ldap-sync-cluster-role.yaml Define a cluster role binding to bind the cluster role to the service account: Example ldap-sync-cluster-role-binding.yaml kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2 1 Reference to the service account created earlier in this procedure. 2 Reference to the cluster role created earlier in this procedure. Create the cluster role binding: USD oc create -f ldap-sync-cluster-role-binding.yaml Define a config map that specifies the sync configuration file: Example ldap-sync-config-map.yaml kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: "/etc/secrets/bindPassword" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: "ou=groups,dc=example,dc=com" 5 scope: sub filter: "(objectClass=groupOfMembers)" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 Define the sync configuration file. 2 Specify the URL. 3 Specify the bindDN . 4 This example uses the RFC2307 schema; adjust values as necessary. You can also use a different schema. 5 Specify the baseDN for groupsQuery . 6 Specify the baseDN for usersQuery . 
Create the config map: USD oc create -f ldap-sync-config-map.yaml Define a cron job: Example ldap-sync-cron-job.yaml kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: "*/30 * * * *" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: "registry.redhat.io/openshift4/ose-cli:latest" command: - "/bin/bash" - "-c" - "oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm" 4 volumeMounts: - mountPath: "/etc/config" name: "ldap-sync-volume" - mountPath: "/etc/secrets" name: "ldap-bind-password" - mountPath: "/etc/ldap-ca" name: "ldap-ca" volumes: - name: "ldap-sync-volume" configMap: name: "ldap-group-syncer" - name: "ldap-bind-password" secret: secretName: "ldap-secret" 5 - name: "ldap-ca" configMap: name: "ca-config-map" 6 restartPolicy: "Never" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: "ClusterFirst" serviceAccountName: "ldap-group-syncer" 1 Configure the settings for the cron job. See "Creating cron jobs" for more information on cron job settings. 2 The schedule for the job specified in cron format . This example cron job runs every 30 minutes. Adjust the frequency as necessary, making sure to take into account how long the sync takes to run. 3 How long, in seconds, to keep finished jobs. This should match the period of the job schedule in order to clean old failed jobs and prevent unnecessary alerts. For more information, see TTL-after-finished Controller in the Kubernetes documentation. 4 The LDAP sync command for the cron job to run. Passes in the sync configuration file that was defined in the config map. 5 This secret was created when the LDAP IDP was configured. 6 This config map was created when the LDAP IDP was configured. Create the cron job: USD oc create -f ldap-sync-cron-job.yaml Additional resources Configuring an LDAP identity provider Creating cron jobs 17.5. LDAP group sync examples This section contains examples for the RFC 2307, Active Directory, and augmented Active Directory schemas. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. 17.5.1. Syncing groups using the RFC 2307 schema For the RFC 2307 schema, the following examples synchronize a group named admins that has two members: Jane and Jim . The examples explain: How the group and users are added to the LDAP server. What the resulting group record in OpenShift Container Platform will be after synchronization. Note These examples assume that all users are direct members of their respective groups. Specifically, no groups have other groups as members. See the Nested Membership Sync Example for information on how to sync nested groups. In the RFC 2307 schema, both users (Jane and Jim) and groups exist on the LDAP server as first-class entries, and group membership is stored in attributes on the group. 
The following snippet of ldif defines the users and group for this schema: LDAP entries that use RFC 2307 schema: rfc2307.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com 1 The group is a first-class entry in the LDAP server. 2 Members of a group are listed with an identifying reference as attributes on the group. Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config.yaml file: USD oc adm groups sync --sync-config=rfc2307_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. 17.5.2. Syncing groups using the RFC2307 schema with user-defined name mappings When syncing groups with user-defined name mappings, the configuration file changes to contain these mappings as shown below. LDAP sync configuration that uses RFC 2307 schema with user-defined name mappings: rfc2307_config_user_defined.yaml kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: "cn=admins,ou=groups,dc=example,dc=com": Administrators 1 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false 1 The user-defined name mapping. 2 The unique identifier attribute that is used for the keys in the user-defined name mapping. You cannot specify groupsQuery filters when using DN for groupUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 3 The attribute to name OpenShift Container Platform groups with if their unique identifier is not in the user-defined name mapping. 4 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 
Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config_user_defined.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config_user_defined.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected] 1 The name of the group as specified by the user-defined name mapping. 17.5.3. Syncing groups using RFC 2307 with user-defined error tolerances By default, if the groups being synced contain members whose entries are outside of the scope defined in the member query, the group sync fails with an error: This often indicates a misconfigured baseDN in the usersQuery field. However, in cases where the baseDN intentionally does not contain some of the members of the group, setting tolerateMemberOutOfScopeErrors: true allows the group sync to continue. Out of scope members will be ignored. Similarly, when the group sync process fails to locate a member for a group, it fails outright with errors: This often indicates a misconfigured usersQuery field. However, in cases where the group contains member entries that are known to be missing, setting tolerateMemberNotFoundErrors: true allows the group sync to continue. Problematic members will be ignored. Warning Enabling error tolerances for the LDAP group sync causes the sync process to ignore problematic member entries. If the LDAP group sync is not configured correctly, this could result in synced OpenShift Container Platform groups missing members. LDAP entries that use RFC 2307 schema with problematic group membership: rfc2307_problematic_users.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2 1 A member that does not exist on the LDAP server. 2 A member that may exist, but is not under the baseDN in the user query for the sync job. 
To tolerate the errors in the above example, the following additions to your sync configuration file must be made: LDAP sync configuration that uses RFC 2307 schema tolerating errors: rfc2307_config_tolerating.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: "ou=groups,dc=example,dc=com" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3 1 The attribute that uniquely identifies a user on the LDAP server. You cannot specify usersQuery filters when using DN for userUIDAttribute. For fine-grained filtering, use the whitelist / blacklist method. 2 When true , the sync job tolerates groups for which some members were not found, and members whose LDAP entries are not found are ignored. The default behavior for the sync job is to fail if a member of a group is not found. 3 When true , the sync job tolerates groups for which some members are outside the user scope given in the usersQuery base DN, and members outside the member query scope are ignored. The default behavior for the sync job is to fail if a member of a group is out of scope. Prerequisites Create the configuration file. Procedure Run the sync with the rfc2307_config_tolerating.yaml file: USD oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the rfc2307_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected] 1 The users that are members of the group, as specified by the sync file. Members for which lookup encountered tolerated errors are absent. 17.5.4. Syncing groups using the Active Directory schema In the Active Directory schema, both users (Jane and Jim) exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema: LDAP entries that use Active Directory schema: active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins 1 The user's group memberships are listed as attributes on the user, and the group does not exist as an entry on the server. The memberOf attribute does not have to be a literal attribute on the user; in some LDAP servers, it is created during search and returned to the client, but not committed to the database. Prerequisites Create the configuration file. 
Procedure Run the sync with the active_directory_config.yaml file: USD oc adm groups sync --sync-config=active_directory_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as listed in the LDAP server. 5 The users that are members of the group, named as specified by the sync file. 17.5.5. Syncing groups using the augmented Active Directory schema In the augmented Active Directory schema, both users (Jane and Jim) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user. The following snippet of ldif defines the users and group for this schema: LDAP entries that use augmented Active Directory schema: augmented_active_directory.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com 1 The user's group memberships are listed as attributes on the user. 2 The group is a first-class entry on the LDAP server. Prerequisites Create the configuration file. Procedure Run the sync with the augmented_active_directory_config.yaml file: USD oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the augmented_active_directory_config.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 
5 The users that are members of the group, named as specified by the sync file. 17.5.5.1. LDAP nested membership sync example Groups in OpenShift Container Platform do not nest. The LDAP server must flatten group membership before the data can be consumed. Microsoft's Active Directory Server supports this feature via the LDAP_MATCHING_RULE_IN_CHAIN rule, which has the OID 1.2.840.113556.1.4.1941 . Furthermore, only explicitly whitelisted groups can be synced when using this matching rule. This section has an example for the augmented Active Directory schema, which synchronizes a group named admins that has one user Jane and one group otheradmins as members. The otheradmins group has one user member: Jim . This example explains: How the group and users are added to the LDAP server. What the LDAP sync configuration file looks like. What the resulting group record in OpenShift Container Platform will be after synchronization. In the augmented Active Directory schema, both users ( Jane and Jim ) and groups exist in the LDAP server as first-class entries, and group membership is stored in attributes on the user or the group. The following snippet of ldif defines the users and groups for this schema: LDAP entries that use augmented Active Directory schema with nested members: augmented_active_directory_nested.ldif dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com 1 2 5 The user's and group's memberships are listed as attributes on the object. 3 4 The groups are first-class entries on the LDAP server. 6 The otheradmins group is a member of the admins group. When syncing nested groups with Active Directory, you must provide an LDAP query definition for both user entries and group entries, as well as the attributes with which to represent them in the internal OpenShift Container Platform group records. Furthermore, certain changes are required in this configuration: The oc adm groups sync command must explicitly whitelist groups. The user's groupMembershipAttributes must include "memberOf:1.2.840.113556.1.4.1941:" to comply with the LDAP_MATCHING_RULE_IN_CHAIN rule. The groupUIDAttribute must be set to dn . The groupsQuery : Must not set filter . Must set a valid derefAliases . Should not set baseDN as that value is ignored. Should not set scope as that value is ignored. 
For clarity, the group you create in OpenShift Container Platform should use attributes other than the distinguished name whenever possible for user- or administrator-facing fields. For example, identify the users of an OpenShift Container Platform group by their e-mail, and use the name of the group as the common name. The following configuration file creates these relationships: LDAP sync configuration that uses augmented Active Directory schema with nested members: augmented_active_directory_config_nested.yaml kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: "ou=users,dc=example,dc=com" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ "memberOf:1.2.840.113556.1.4.1941:" ] 5 1 groupsQuery filters cannot be specified. The groupsQuery base DN and scope values are ignored. groupsQuery must set a valid derefAliases . 2 The attribute that uniquely identifies a group on the LDAP server. It must be set to dn . 3 The attribute to use as the name of the group. 4 The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. 5 The attribute on the user that stores the membership information. Note the use of LDAP_MATCHING_RULE_IN_CHAIN . Prerequisites Create the configuration file. Procedure Run the sync with the augmented_active_directory_config_nested.yaml file: USD oc adm groups sync \ 'cn=admins,ou=groups,dc=example,dc=com' \ --sync-config=augmented_active_directory_config_nested.yaml \ --confirm Note You must explicitly whitelist the cn=admins,ou=groups,dc=example,dc=com group. OpenShift Container Platform creates the following group record as a result of the above sync operation: OpenShift Container Platform group created by using the augmented_active_directory_config_nested.yaml file apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected] 1 The last time this OpenShift Container Platform group was synchronized with the LDAP server, in ISO 6801 format. 2 The unique identifier for the group on the LDAP server. 3 The IP address and host of the LDAP server where this group's record is stored. 4 The name of the group as specified by the sync file. 5 The users that are members of the group, named as specified by the sync file. Note that members of nested groups are included since the group membership was flattened by the Microsoft Active Directory Server. 17.6. LDAP sync configuration specification The object specification for the configuration file is below. Note that the different schema objects have different fields. For example, v1.ActiveDirectoryConfig has no groupsQuery field whereas v1.RFC2307Config and v1.AugmentedActiveDirectoryConfig both do. Important There is no support for binary attributes. All attribute data coming from the LDAP server must be in the format of a UTF-8 encoded string. For example, never use a binary attribute, such as objectGUID , as an ID attribute. You must use string attributes, such as sAMAccountName or userPrincipalName , instead. 17.6.1. 
v1.LDAPSyncConfig LDAPSyncConfig holds the necessary configuration options to define an LDAP group sync. Name Description Schema kind String value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#types-kinds string apiVersion Defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#resources string url Host is the scheme, host and port of the LDAP server to connect to: scheme://host:port string bindDN Optional DN to bind to the LDAP server with. string bindPassword Optional password to bind with during the search phase. v1.StringSource insecure If true , indicates the connection should not use TLS. If false , ldaps:// URLs connect using TLS, and ldap:// URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . If you set insecure to true , you cannot use ldaps:// URL schemes. boolean ca Optional trusted certificate authority bundle to use when making requests to the server. If empty, the default system roots are used. string groupUIDNameMapping Optional direct mapping of LDAP group UIDs to OpenShift Container Platform group names. object rfc2307 Holds the configuration for extracting data from an LDAP server set up in a fashion similar to RFC2307: first-class group and user entries, with group membership determined by a multi-valued attribute on the group entry listing its members. v1.RFC2307Config activeDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory: first-class user entries, with group membership determined by a multi-valued attribute on members listing groups they are a member of. v1.ActiveDirectoryConfig augmentedActiveDirectory Holds the configuration for extracting data from an LDAP server set up in a fashion similar to that used in Active Directory as described above, with one addition: first-class group entries exist and are used to hold metadata but not group membership. v1.AugmentedActiveDirectoryConfig 17.6.2. v1.StringSource StringSource allows specifying a string inline, or externally via environment variable or file. When it contains only a string value, it marshals to a simple JSON string. Name Description Schema value Specifies the cleartext value, or an encrypted value if keyFile is specified. string env Specifies an environment variable containing the cleartext value, or an encrypted value if the keyFile is specified. string file References a file containing the cleartext value, or an encrypted value if a keyFile is specified. string keyFile References a file containing the key to use to decrypt the value. string 17.6.3. v1.LDAPQuery LDAPQuery holds the options necessary to build an LDAP query. Name Description Schema baseDN DN of the branch of the directory where all searches should start from. string scope The optional scope of the search. Can be base : only the base object, one : all objects on the base level, sub : the entire subtree. Defaults to sub if not set. string derefAliases The optional behavior of the search with regards to aliases. 
Can be never : never dereference aliases, search : only dereference in searching, base : only dereference in finding the base object, always : always dereference. Defaults to always if not set. string timeout Holds the limit of time in seconds that any request to the server can remain outstanding before the wait for a response is given up. If this is 0 , no client-side limit is imposed. integer filter A valid LDAP search filter that retrieves all relevant entries from the LDAP server with the base DN. string pageSize Maximum preferred page size, measured in LDAP entries. A page size of 0 means no paging will be done. integer 17.6.4. v1.RFC2307Config RFC2307Config holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the RFC2307 schema. Name Description Schema groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group. string array groupMembershipAttributes Defines which attributes on an LDAP group entry will be interpreted as its members. The values contained in those attributes must be queryable by your UserUIDAttribute . string array usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userUIDAttribute Defines which attribute on an LDAP user entry will be interpreted as its unique identifier. It must correspond to values that will be found from the GroupMembershipAttributes . string userNameAttributes Defines which attributes on an LDAP user entry will be used, in order, as its OpenShift Container Platform user name. The first attribute with a non-empty value is used. This should match your PreferredUsername setting for your LDAPPasswordIdentityProvider . The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array tolerateMemberNotFoundErrors Determines the behavior of the LDAP sync job when missing user entries are encountered. If true , an LDAP query for users that does not find any will be tolerated and an only and error will be logged. If false , the LDAP sync job will fail if a query for users doesn't find any. The default value is false . Misconfigured LDAP sync jobs with this flag set to true can cause group membership to be removed, so it is recommended to use this flag with caution. boolean tolerateMemberOutOfScopeErrors Determines the behavior of the LDAP sync job when out-of-scope user entries are encountered. If true , an LDAP query for a user that falls outside of the base DN given for the all user query will be tolerated and only an error will be logged. If false , the LDAP sync job will fail if a user query would search outside of the base DN specified by the all user query. Misconfigured LDAP sync jobs with this flag set to true can result in groups missing users, so it is recommended to use this flag with caution. boolean 17.6.5. v1.ActiveDirectoryConfig ActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries. 
v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name. The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. string array 17.6.6. v1.AugmentedActiveDirectoryConfig AugmentedActiveDirectoryConfig holds the necessary configuration options to define how an LDAP group sync interacts with an LDAP server using the augmented Active Directory schema. Name Description Schema usersQuery Holds the template for an LDAP query that returns user entries. v1.LDAPQuery userNameAttributes Defines which attributes on an LDAP user entry will be interpreted as its OpenShift Container Platform user name. The attribute to use as the name of the user in the OpenShift Container Platform group record. mail or sAMAccountName are preferred choices in most installations. string array groupMembershipAttributes Defines which attributes on an LDAP user entry will be interpreted as the groups it is a member of. string array groupsQuery Holds the template for an LDAP query that returns group entries. v1.LDAPQuery groupUIDAttribute Defines which attribute on an LDAP group entry will be interpreted as its unique identifier. ( ldapGroupUID ) string groupNameAttributes Defines which attributes on an LDAP group entry will be interpreted as its name to use for an OpenShift Container Platform group. string array
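As an illustration of how the connection fields and v1.StringSource fit together, the following sketch writes a sync configuration that reads the bind password from a file and trusts a custom CA bundle, then runs the sync. The server address, bind DN, file paths, and schema section are assumptions; substitute the values and schema block that match your directory:
# Hypothetical values for illustration only.
cat <<'EOF' > ldap-sync-config.yaml
kind: LDAPSyncConfig
apiVersion: v1
url: ldaps://ldap.example.com:636
bindDN: cn=admin,dc=example,dc=com
bindPassword:
  file: /etc/secrets/bindPassword
insecure: false
ca: /etc/ldap-ca/ca.crt
augmentedActiveDirectory:
  groupsQuery:
    baseDN: "ou=groups,dc=example,dc=com"
    scope: sub
    derefAliases: never
    pageSize: 0
  groupUIDAttribute: dn
  groupNameAttributes: [ cn ]
  usersQuery:
    baseDN: "ou=users,dc=example,dc=com"
    scope: sub
    derefAliases: never
    filter: (objectclass=person)
    pageSize: 0
  userNameAttributes: [ mail ]
  groupMembershipAttributes: [ memberOf ]
EOF
oc adm groups sync --sync-config=ldap-sync-config.yaml --confirm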
[ "url: ldap://10.0.0.0:389 1 bindDN: cn=admin,dc=example,dc=com 2 bindPassword: <password> 3 insecure: false 4 ca: my-ldap-ca-bundle.crt 5", "baseDN: ou=users,dc=example,dc=com 1 scope: sub 2 derefAliases: never 3 timeout: 0 4 filter: (objectClass=person) 5 pageSize: 0 6", "groupUIDNameMapping: \"cn=group1,ou=groups,dc=example,dc=com\": firstgroup \"cn=group2,ou=groups,dc=example,dc=com\": secondgroup \"cn=group3,ou=groups,dc=example,dc=com\": thirdgroup", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 1 insecure: false 2 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 3 groupNameAttributes: [ cn ] 4 groupMembershipAttributes: [ member ] 5 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 6 userNameAttributes: [ mail ] 7 tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 activeDirectory: usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 1 groupMembershipAttributes: [ memberOf ] 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 1 groupNameAttributes: [ cn ] 2 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 3 groupMembershipAttributes: [ memberOf ] 4", "oc adm groups sync --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --sync-config=config.yaml --confirm", "oc adm groups sync --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --sync-config=config.yaml --confirm", "oc adm groups sync <group_unique_identifier> --whitelist=<whitelist_file> --blacklist=<blacklist_file> --sync-config=config.yaml --confirm", "oc adm groups sync --type=openshift --whitelist=<whitelist_file> --sync-config=config.yaml --confirm", "oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --whitelist=/path/to/whitelist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc adm prune groups --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm", "oc new-project ldap-sync 1", "kind: ServiceAccount apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync", "oc create -f ldap-sync-service-account.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: ldap-group-syncer rules: - apiGroups: - '' - user.openshift.io resources: - groups verbs: - get - list - create - update", "oc create -f ldap-sync-cluster-role.yaml", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: ldap-group-syncer subjects: - kind: ServiceAccount name: ldap-group-syncer 1 namespace: ldap-sync roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: ldap-group-syncer 2", "oc create -f ldap-sync-cluster-role-binding.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: ldap-group-syncer namespace: ldap-sync data: sync.yaml: | 1 kind: LDAPSyncConfig apiVersion: v1 url: ldaps://10.0.0.0:389 2 
insecure: false bindDN: cn=admin,dc=example,dc=com 3 bindPassword: file: \"/etc/secrets/bindPassword\" ca: /etc/ldap-ca/ca.crt rfc2307: 4 groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" 5 scope: sub filter: \"(objectClass=groupOfMembers)\" derefAliases: never pageSize: 0 groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" 6 scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn userNameAttributes: [ uid ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc create -f ldap-sync-config-map.yaml", "kind: CronJob apiVersion: batch/v1 metadata: name: ldap-group-syncer namespace: ldap-sync spec: 1 schedule: \"*/30 * * * *\" 2 concurrencyPolicy: Forbid jobTemplate: spec: backoffLimit: 0 ttlSecondsAfterFinished: 1800 3 template: spec: containers: - name: ldap-group-sync image: \"registry.redhat.io/openshift4/ose-cli:latest\" command: - \"/bin/bash\" - \"-c\" - \"oc adm groups sync --sync-config=/etc/config/sync.yaml --confirm\" 4 volumeMounts: - mountPath: \"/etc/config\" name: \"ldap-sync-volume\" - mountPath: \"/etc/secrets\" name: \"ldap-bind-password\" - mountPath: \"/etc/ldap-ca\" name: \"ldap-ca\" volumes: - name: \"ldap-sync-volume\" configMap: name: \"ldap-group-syncer\" - name: \"ldap-bind-password\" secret: secretName: \"ldap-secret\" 5 - name: \"ldap-ca\" configMap: name: \"ca-config-map\" 6 restartPolicy: \"Never\" terminationGracePeriodSeconds: 30 activeDeadlineSeconds: 500 dnsPolicy: \"ClusterFirst\" serviceAccountName: \"ldap-group-syncer\"", "oc create -f ldap-sync-cron-job.yaml", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 1 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com 2 member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=rfc2307_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "kind: LDAPSyncConfig apiVersion: v1 groupUIDNameMapping: \"cn=admins,ou=groups,dc=example,dc=com\": Administrators 1 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never pageSize: 0 userUIDAttribute: dn 4 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: false tolerateMemberOutOfScopeErrors: false", "oc adm groups sync --sync-config=rfc2307_config_user_defined.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: 
cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: Administrators 1 users: - [email protected] - [email protected]", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with dn=\"<user-dn>\" would search outside of the base dn specified (dn=\"<base-dn>\")\".", "Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" refers to a non-existent entry\". Error determining LDAP group membership for \"<group>\": membership lookup for user \"<user>\" in group \"<group>\" failed because of \"search for entry with base dn=\"<user-dn>\" and filter \"<filter>\" did not return any results\".", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com member: cn=INVALID,ou=users,dc=example,dc=com 1 member: cn=Jim,ou=OUTOFSCOPE,dc=example,dc=com 2", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 rfc2307: groupsQuery: baseDN: \"ou=groups,dc=example,dc=com\" scope: sub derefAliases: never groupUIDAttribute: dn groupNameAttributes: [ cn ] groupMembershipAttributes: [ member ] usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never userUIDAttribute: dn 1 userNameAttributes: [ mail ] tolerateMemberNotFoundErrors: true 2 tolerateMemberOutOfScopeErrors: true 3", "oc adm groups sync --sync-config=rfc2307_config_tolerating.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com openshift.io/ldap.url: LDAP_SERVER_IP:389 creationTimestamp: name: admins users: 1 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: admins 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: admins", "oc adm groups sync --sync-config=active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: admins 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: 
cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 2 objectClass: groupOfNames cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=Jim,ou=users,dc=example,dc=com", "oc adm groups sync --sync-config=augmented_active_directory_config.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]", "dn: ou=users,dc=example,dc=com objectClass: organizationalUnit ou: users dn: cn=Jane,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jane sn: Smith displayName: Jane Smith mail: [email protected] memberOf: cn=admins,ou=groups,dc=example,dc=com 1 dn: cn=Jim,ou=users,dc=example,dc=com objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson objectClass: testPerson cn: Jim sn: Adams displayName: Jim Adams mail: [email protected] memberOf: cn=otheradmins,ou=groups,dc=example,dc=com 2 dn: ou=groups,dc=example,dc=com objectClass: organizationalUnit ou: groups dn: cn=admins,ou=groups,dc=example,dc=com 3 objectClass: group cn: admins owner: cn=admin,dc=example,dc=com description: System Administrators member: cn=Jane,ou=users,dc=example,dc=com member: cn=otheradmins,ou=groups,dc=example,dc=com dn: cn=otheradmins,ou=groups,dc=example,dc=com 4 objectClass: group cn: otheradmins owner: cn=admin,dc=example,dc=com description: Other System Administrators memberOf: cn=admins,ou=groups,dc=example,dc=com 5 6 member: cn=Jim,ou=users,dc=example,dc=com", "kind: LDAPSyncConfig apiVersion: v1 url: ldap://LDAP_SERVICE_IP:389 augmentedActiveDirectory: groupsQuery: 1 derefAliases: never pageSize: 0 groupUIDAttribute: dn 2 groupNameAttributes: [ cn ] 3 usersQuery: baseDN: \"ou=users,dc=example,dc=com\" scope: sub derefAliases: never filter: (objectclass=person) pageSize: 0 userNameAttributes: [ mail ] 4 groupMembershipAttributes: [ \"memberOf:1.2.840.113556.1.4.1941:\" ] 5", "oc adm groups sync 'cn=admins,ou=groups,dc=example,dc=com' --sync-config=augmented_active_directory_config_nested.yaml --confirm", "apiVersion: user.openshift.io/v1 kind: Group metadata: annotations: openshift.io/ldap.sync-time: 2015-10-13T10:08:38-0400 1 openshift.io/ldap.uid: cn=admins,ou=groups,dc=example,dc=com 2 openshift.io/ldap.url: LDAP_SERVER_IP:389 3 creationTimestamp: name: admins 4 users: 5 - [email protected] - [email protected]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/authentication_and_authorization/ldap-syncing
function::pn
function::pn Name function::pn - Returns the active probe name Synopsis Arguments None Description This function returns the script-level probe point associated with a currently running probe handler, including wild-card expansion effects. Context: The current probe point.
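A short usage sketch, assuming the systemtap package is installed and the script is run as root; the timer probe chosen here is arbitrary and only serves to trigger the handler once:
# Print the script-level probe point for the currently running handler, then exit.
stap -e 'probe timer.s(1) { printf("active probe: %s\n", pn()); exit() }'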
[ "pn:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-pn
Chapter 4. Technology Previews
Chapter 4. Technology Previews This chapter provides a list of all available Technology Previews in Red Hat Enterprise Linux 6.7. Technology Preview features are currently not supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues. During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat clustering to fully support Technology Preview features in a future release. For information about the Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/ . 4.1. Storage and File Systems dm-era Device Mapper The device-mapper-persistent-data package now provides tools to help use the new dm-era device mapper functionality released as a Technology Preview. The dm-era functionality keeps track of which blocks on a device were written within user-defined periods of time called an era . This functionality allows backup software to track changed blocks or restore the coherency of a cache after reverting changes. Cross Realm Kerberos Trust Functionality for samba4 Libraries The Cross Realm Kerberos Trust functionality provided by Identity Management, which relies on the capabilities of the samba4 client library, is included as a Technology Preview starting with Red Hat Enterprise Linux 6.4. This functionality uses the libndr-nbt library to prepare Connection-less Lightweight Directory Access Protocol (CLDAP) messages. Package: samba-3.6.23-20 System Information Gatherer and Reporter (SIGAR) The System Information Gatherer and Reporter (SIGAR) is a library and command-line tool for accessing operating system and hardware level information across multiple platforms and programming languages. In Red Hat Enterprise Linux 6.4 and later, SIGAR is considered a Technology Preview package. Package: sigar-1.6.5-0.4.git58097d9 DIF/DIX support DIF/DIX, is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 6. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be checked by the storage device, and by the receiving HBA. The DIF/DIX hardware checksum feature must only be used with applications that exclusively issue O_DIRECT I/O. These applications may use the raw block device, or the XFS file system in O_DIRECT mode. (XFS is the only file system that does not fall back to buffered I/O when doing certain allocation operations.) Only applications designed for use with O_DIRECT I/O and DIF/DIX hardware should enable this feature. For more information, refer to section Block Devices with DIF/DIX Enabled in the Storage Administration Guide . 
Package: kernel-2.6.32-554 LVM Application Programming Interface (API) Red Hat Enterprise Linux 6 features the new LVM application programming interface (API) as a Technology Preview. This API is used to query and control certain aspects of LVM. Package: lvm2-2.02.118-2 FS-Cache FS-Cache in Red Hat Enterprise Linux 6 enables networked file systems (for example, NFS) to have a persistent cache of data on the client machine. Package: cachefilesd-0.10.2-1
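For example, to try the FS-Cache Technology Preview on a client, a minimal sketch (assuming the cachefilesd package is installed and an NFS export is available at nfs-server:/export) is:
# Start the local cache daemon and enable it at boot.
service cachefilesd start
chkconfig cachefilesd on
# Mount the NFS export with the fsc option so that file data is cached persistently.
mount -t nfs -o fsc nfs-server:/export /mnt/nfs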
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/technology-previews
Kamelets Reference
Kamelets Reference Red Hat build of Apache Camel K 1.10.7 Kamelets Reference Red Hat build of Apache Camel K Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the scale of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/getting_started_with_amq_streams_on_openshift/making-open-source-more-inclusive
4.4. Configuration Examples
4.4. Configuration Examples 4.4.1. Uploading to an FTP site The following example creates an FTP site that allows a dedicated user to upload files. It creates the directory structure and the required SELinux configuration changes: Run the setsebool ftp_home_dir=1 command as the root user to enable access to FTP home directories. Run the mkdir -p /myftp/pub command as the root user to create a new top-level directory. Set Linux permissions on the /myftp/pub/ directory to allow a Linux user write access. This example changes the owner and group from root to owner user1 and group root. Replace user1 with the user you want to give write access to: The chown command changes the owner and group permissions. The chmod command changes the mode, allowing the user1 user read, write, and execute permissions, and members of the root group read, write, and execute permissions. Everyone else has read and execute permissions, which allows the Apache HTTP Server to read files from this directory. When running SELinux, files and directories must be labeled correctly to allow access. Setting Linux permissions is not enough. Files labeled with the public_content_t type allow them to be read by FTP, Apache HTTP Server, Samba, and rsync. Files labeled with the public_content_rw_t type can be written to by FTP. Other services, such as Samba, require Booleans to be set before they can write to files labeled with the public_content_rw_t type. Label the top-level directory ( /myftp/ ) with the public_content_t type, to prevent copied or newly-created files under /myftp/ from being written to or modified by services. Run the following command as the root user to add the label change to file-context configuration: Run the restorecon -R -v /myftp/ command to apply the label change: Confirm /myftp is labeled with the public_content_t type, and /myftp/pub/ is labeled with the default_t type: FTP must be allowed to write to a directory before users can upload files via FTP. SELinux allows FTP to write to directories labeled with the public_content_rw_t type. This example uses /myftp/pub/ as the directory FTP can write to. Run the following command as the root user to add the label change to file-context configuration: Run the restorecon -R -v /myftp/pub command as the root user to apply the label change: The allow_ftpd_anon_write Boolean must be on to allow vsftpd to write to files that are labeled with the public_content_rw_t type. Run the following command as the root user to enable this Boolean: Do not use the -P option if you do not want changes to persist across reboots. The following example demonstrates logging in via FTP and uploading a file. This example uses the user1 user from the example, where user1 is the dedicated owner of the /myftp/pub/ directory: Run the cd ~/ command to change into your home directory. Then, run the mkdir myftp command to create a directory to store files to upload via FTP. Run the cd ~/myftp command to change into the ~/myftp/ directory. In this directory, create an ftpupload file. Copy the following contents into this file: Run the getsebool allow_ftpd_anon_write command to confirm the allow_ftpd_anon_write Boolean is on: If this Boolean is off, run the setsebool -P allow_ftpd_anon_write on command as the root user to enable it. Do not use the -P option if you do not want the change to persist across reboots. Run the service vsftpd start command as the root user to start vsftpd : Run the ftp localhost command. 
When prompted for a user name, enter the user name of the user who has write access, and then enter the correct password for that user: The upload succeeds because the allow_ftpd_anon_write Boolean is enabled.
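The configuration half of the procedure above can be collected into a single sequence for reference. This is a sketch only, assuming the /myftp/pub layout and the user1 account from the example; adjust the user name for your environment:
# Create the upload area and give user1 write access.
mkdir -p /myftp/pub
chown user1:root /myftp/pub
chmod 775 /myftp/pub
# Label the top-level directory read-only and the pub/ subdirectory writable for FTP.
semanage fcontext -a -t public_content_t /myftp
semanage fcontext -a -t public_content_rw_t "/myftp/pub(/.*)?"
restorecon -R -v /myftp/
# Allow vsftpd to write to public_content_rw_t files and start the service.
setsebool -P allow_ftpd_anon_write on
service vsftpd start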
[ "~]# chown user1:root /myftp/pub ~]# chmod 775 /myftp/pub", "~]# semanage fcontext -a -t public_content_t /myftp", "~]# restorecon -R -v /myftp/ restorecon reset /myftp context unconfined_u:object_r:default_t:s0->system_u:object_r:public_content_t:s0", "~]USD ls -dZ /myftp/ drwxr-xr-x. root root system_u:object_r:public_content_t:s0 /myftp/ ~]USD ls -dZ /myftp/pub/ drwxrwxr-x. user1 root unconfined_u:object_r:default_t:s0 /myftp/pub/", "~]# semanage fcontext -a -t public_content_rw_t \"/myftp/pub(/.*)?\"", "~]# restorecon -R -v /myftp/pub restorecon reset /myftp/pub context system_u:object_r:default_t:s0->system_u:object_r:public_content_rw_t:s0", "~]# setsebool -P allow_ftpd_anon_write on", "File upload via FTP from a home directory.", "~]USD getsebool allow_ftpd_anon_write allow_ftpd_anon_write --> on", "~]# service vsftpd start Starting vsftpd for vsftpd: [ OK ]", "~]USD ftp localhost Connected to localhost (127.0.0.1). 220 (vsFTPd 2.1.0) Name (localhost: username ): 331 Please specify the password. Password: Enter the correct password 230 Login successful. Remote system type is UNIX. Using binary mode to transfer files. ftp> cd myftp 250 Directory successfully changed. ftp> put ftpupload local: ftpupload remote: ftpupload 227 Entering Passive Mode (127,0,0,1,241,41). 150 Ok to send data. 226 File receive OK. ftp> 221 Goodbye." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-file_transfer_protocol-configuration_examples
Appendix A. Troubleshooting permission issues
Appendix A. Troubleshooting permission issues Satellite upgrades perform pre-upgrade checks. If the pre-upgrade check discovers permission issues, it fails with an error similar to the following one: If you see an error like this on your Satellite Server, identify and remedy the permission issues. Procedure On your Satellite Server, identify permission issues: Fix permission issues: Verification Rerun the check to ensure no permission issues remain:
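If you want to confirm where the failure originates before reseeding, the permission error from the pre-upgrade check is also recorded in the Foreman application log. A quick check might look like the following sketch; the log path assumes a default Satellite Server installation:
# Show the most recent permission exceptions reported during the pre-upgrade check.
grep -i "PermissionMissingException" /var/log/foreman/production.log | tail -n 5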
[ "2024-01-29T20:50:09 [W|app|] Could not create role 'Ansible Roles Manager': ERF73-0602 [Foreman::PermissionMissingException]: some permissions were not found:", "satellite-maintain health check --label duplicate_permissions", "foreman-rake db:seed", "satellite-maintain health check --label duplicate_permissions" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_connected_red_hat_satellite_to_6.16/troubleshooting-permission-issues
10.5.5. KeepAlive
10.5.5. KeepAlive KeepAlive sets whether the server allows more than one request per connection and can be used to prevent any one client from consuming too much of the server's resources. By default, KeepAlive is set to off . If KeepAlive is set to on and the server becomes very busy, the server can quickly spawn the maximum number of child processes. In this situation, the server slows down significantly. If KeepAlive is enabled, it is a good idea to set the KeepAliveTimeout low (refer to Section 10.5.7, " KeepAliveTimeout " for more information about the KeepAliveTimeout directive) and monitor the /var/log/httpd/error_log log file on the server. This log reports when the server is running out of child processes.
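For example, a conservative setup that keeps persistent connections enabled while limiting how long an idle connection can hold a child process might look like the following sketch. The directive values are illustrative only, not recommendations for every workload, and assume the default configuration file location:
# Illustrative httpd.conf settings -- edit /etc/httpd/conf/httpd.conf by hand:
#   KeepAlive On
#   MaxKeepAliveRequests 100
#   KeepAliveTimeout 3
# Verify the active values, restart the server, and watch the error log:
grep -E '^(KeepAlive|MaxKeepAliveRequests|KeepAliveTimeout)' /etc/httpd/conf/httpd.conf
service httpd restart
tail -n 20 /var/log/httpd/error_log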
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-keepalive
Part III. Administration: Managing Servers
Part III. Administration: Managing Servers This part covers administration-related topics, such as managing the Identity Management server and services, and replication between servers in an Identity Management domain. It provides details on the Identity Management topology and gives instructions on how to update the Identity Management packages on the system. Furthermore, this part explains how to manually back up and restore the Identity Management system in case of a disaster affecting an Identity Management deployment. The final chapter details the different internal access control mechanisms.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.admin-guide-servers
Chapter 1. Introduction and Goals of This Guide
Chapter 1. Introduction and Goals of This Guide Congratulations! You now have quite a bit of experience with Red Hat JBoss Data Virtualization: you can perform a basic installation, run the software, and use the Dashboard Builder to connect to data sources. It is now time to showcase more of the product's features that you can use to build your own solutions. These features are demonstrated in the various "quick starts" that come with the product.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/quick_starts_guide/introduction_and_goals_of_this_guide
4.308. subscription-manager
4.308. subscription-manager 4.308.1. RHBA-2011:1695 - subscription-manager bug fix and enhancement update Updated subscription-manager packages that fix several bugs and adds an enhancement are now available for Red Hat Enterprise Linux 6. The Subscription Manager tool allows users to understand the specific products which have been installed on their machines, and the specific subscriptions which their machines are consuming. Bug Fixes BZ# 709412 Two variations were used in the ProductName parameter for the workstation subscription: "Red Hat Enterprise Linux Workstation" and "Red Hat Enterprise Linux 6 Workstation". As a consequence, users running the "subscription-manager list --installed" command, while subscribed to the "Red Hat Enterprise Linux Workstation" subscription, got a misleading report. With this update, the ProductId parameter is used instead to compare subscriptions in the described scenario and the bug no longer occurs. BZ# 701425 Previously, the NSS (Network Security Services) library used the "first" entitlement certificate to access all content. As a consequence, access to subsequent repositories was rejected because NSS was using the certificate from the first repository. This update allows all repositories to be controlled by unique entitlement certificates. BZ# 703921 Previously, the "Start Date" and "End Date" fields in the Contract Selection dialog window of the subscription-manager-gui utility were not populated. With this update, the dates are displayed as expected. BZ# 711133 When the subscription-manager utility had been upgraded, it put incorrect data to the sslclientkey repository parameter value. Consequently, when the yum utility was launched to install software, yum terminated with the "[Errno 14] problem with the local client certificate" error message. The bug in subscription-manager has been fixed and yum can now be run without any certificate errors. BZ# 693709 Previously, running the firstboot utility with the "-r" option while the subscription-manager-firstboot utility was already installed caused the firstboot utility to terminate with a traceback; it also failed to display a message stating that the computer was already registered with Red Hat Network. This bug has been fixed, the firstboot utility now displays the warning message properly, and no tracebacks are returned in the described scenario. BZ# 730018 When a machine was registered via the Red Hat Subscription Manager tool, an attempt to register it with Red Hat Network Classic or Red Hat Network Satellite caused the rhn_register utility to display a confusing warning message. This update rephrases this warning message for clarity. BZ# 725535 Due to a bug, the code assumed it could access the /var/rhn/rhsm directory. Consequently, the background job, which warns the user that a machine is not covered by a subscription, terminated unexpectedly if the /var/rhn/rhsm directory did not exist. With this update, the error is properly logged and the crashes no longer occur in the described scenario. BZ# 580905 The initial subscription-manager package did not provide a help function built into the user interface. Help was only available via manual pages and online documentation. This update adds a standard help button to the subscription-manager user interface. BZ# 701315 Subscriptions are represented by numeric serial numbers. When a number longer than four bytes was used as a serial number, the subscription-manager utility terminated unexpectedly. 
This bug has been fixed and large numbers are now supported as serial numbers in subscription-manager. Enhancement BZ# 710172 Previously, in order to locate subscriptions for machines not covered by a subscription, a user had to log into each such machine and locate new subscriptions manually. This update allows a machine to automatically look for new subscriptions in the described scenario. Users of subscription-manager are advised to upgrade to these updated packages, which fix these bugs. 4.308.2. RHBA-2012:0562 - subscription-manager bug fix update Updated subscription-manager packages that fix one bug are now available for Red Hat Enterprise Linux 6. The subscription-manager packages provide programs and libraries to allow users to manage subscriptions and yum repositories from the Red Hat Entitlement platform. Bug Fix BZ# 812446 Previously, the subscription-manager utility could incorrectly delete a product certificate when running the yum utility and no yum repositories were enabled or in use (for example on newly installed Red Hat Enterprise Linux 6 systems). To prevent undesirable deletions of product certificates, subscription-manager now performs a verification check before deleting a certificate. In addition, all deletions of product certificates are now logged. All users of subscription-manager are advised to upgrade to these updated packages, which fix this bug.
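A minimal sketch of applying the updated packages and re-checking the installed-product view mentioned in the bug fixes above (run as root on a registered system):
# Apply the erratum and confirm the subscription status of installed products.
yum update subscription-manager
subscription-manager list --installed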
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/subscription-manager
Chapter 8. Using CPU Manager and Topology Manager
Chapter 8. Using CPU Manager and Topology Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. Topology Manager collects hints from the CPU Manager, Device Manager, and other Hint Providers to align pod resources, such as CPU, SR-IOV VFs, and other device resources, for all Quality of Service (QoS) classes on the same non-uniform memory access (NUMA) node. Topology Manager uses topology information from the collected hints to decide if a pod can be accepted or rejected on a node, based on the configured Topology Manager policy and pod resources requested. Topology Manager is useful for workloads that use hardware accelerators to support latency-critical execution and high throughput parallel computation. To use Topology Manager you must configure CPU Manager with the static policy. 8.1. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. 
Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . 
Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 8.2. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. 
This results in a pod in a Terminated state with a pod admission failure. 8.3. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 8.4. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage.
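After the KubeletConfig change rolls out, you can confirm that the policy reached the kubelet on a configured node by using the same pattern as the CPU Manager verification above; the node name is an assumption carried over from the earlier example:
# oc debug node/perf-node.example.com
sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep -i topologyManager
The output should show the policy you configured, for example topologyManagerPolicy: single-numa-node .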
[ "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/using-cpu-manager
Building, running, and managing containers
Building, running, and managing containers Red Hat Enterprise Linux 9 Using Podman, Buildah, and Skopeo on Red Hat Enterprise Linux 9 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/index
Chapter 6. Deploying the Shared File Systems service with native CephFS
Chapter 6. Deploying the Shared File Systems service with native CephFS CephFS is the highly scalable, open-source, distributed file system component of Red Hat Ceph Storage, a unified distributed storage platform. Ceph Storage implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph Storage cluster. The Shared File Systems service (manila) enables users to create shares in CephFS and access them using the native CephFS protocol. The Shared File Systems service manages the life cycle of these shares from within OpenStack. With this release, director can deploy the Shared File Systems service with a native CephFS back end on the overcloud. Important This chapter pertains to the deployment and use of native CephFS to provide a self-service Shared File Systems service in your Red Hat OpenStack Platform (RHOSP) cloud through the native CephFS NAS protocol. This type of deployment requires guest VM access to the Ceph public network and infrastructure. Deploy native CephFS with trusted OpenStack Platform tenants only, because it requires a permissive trust model that is not suitable for general purpose OpenStack Platform deployments. For general purpose OpenStack Platform deployments that use a conventional tenant trust model, you can deploy CephFS through the NFS protocol. 6.1. CephFS with native driver The CephFS native driver combines the OpenStack Shared File Systems service (manila) and Red Hat Ceph Storage. When you use Red Hat OpenStack Platform (RHOSP) director, the Controller nodes host the Ceph daemons, such as the manager, metadata servers (MDS), and monitors (MON), as well as the Shared File Systems service. Compute nodes can host one or more projects. Projects, which were formerly referred to as tenants, contain user-managed VMs, each with two NICs. To access the Ceph and manila daemons, projects connect to the daemons over the public Ceph storage network. On this network, you can access data on the storage nodes provided by the Ceph Object Storage Daemons (OSDs). Instances, or virtual machines (VMs), that are hosted on the projects boot with two NICs: one dedicated to the storage provider network and the second to project-owned routers to the external provider network. The storage provider network connects the VMs that run on the projects to the public Ceph storage network. The Ceph public network provides back-end access to the Ceph object storage nodes, metadata servers (MDS), and Controller nodes. Using the native driver, CephFS relies on cooperation with the clients and servers to enforce quotas, guarantee project isolation, and provide security. CephFS with the native driver works well in an environment with trusted end users on a private cloud, because this configuration requires software that runs under user control to cooperate and work correctly. 6.2. Native CephFS back-end security The native CephFS back end requires a permissive trust model for Red Hat OpenStack Platform (RHOSP) tenants. This trust model is not appropriate for general purpose OpenStack Platform clouds that deliberately block users from directly accessing the infrastructure behind the services that the OpenStack Platform provides. With native CephFS, user Compute instances connect directly to the Ceph public network where the Ceph service daemons are exposed.
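Because the native driver depends on this client cooperation, the quota that backs a share is visible directly on the share directory from any guest that has mounted it. The following is a minimal illustration rather than part of the official procedure; the mount point is a placeholder, and it assumes the guest uses a CephFS client that exposes the standard quota extended attributes.
# On a guest VM that has mounted a CephFS share at a placeholder mount point
getfattr -n ceph.quota.max_bytes /mnt/cephfs-share
Example output, where the value corresponds to the share size that the Shared File Systems service set when the share was created:
# file: mnt/cephfs-share
ceph.quota.max_bytes="10737418240"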
CephFS clients that run on user VMs interact cooperatively with the Ceph service daemons, and they interact directly with RADOS to read and write file data blocks. CephFS quotas, which enforce Shared File Systems (manila) share sizes, are enforced on the client side, such as on VMs that are owned by (RHOSP) users. The client side software on user VMs might not be current, which can leave critical cloud infrastructure vulnerable to malicious or inadvertently harmful software that targets the Ceph service ports. Deploy native CephFS as a back end only in environments in which trusted users keep client-side software up to date. Ensure that no software that can impact the Red Hat Ceph Storage infrastructure runs on your VMs. For a general purpose RHOSP deployment that serves many untrusted users, deploy CephFS through NFS. For more information about using CephFS through NFS, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Users might not keep client-side software current, and they might fail to exclude harmful software from their VMs, but using CephFS through NFS, they only have access to the public side of an NFS server, not to the Ceph infrastructure itself. NFS does not require the same kind of cooperative client and, in the worst case, an attack from a user VM can damage the NFS gateway without damaging the Ceph Storage infrastructure behind it. You can expose the native CephFS back end to all trusted users, but you must enact the following security measures: Configure the storage network as a provider network. Impose role-based access control (RBAC) policies to secure the Storage provider network. Create a private share type. 6.3. Native CephFS deployment A typical native Ceph file system (CephFS) installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following components: RHOSP Controller nodes that run containerized Ceph metadata server (MDS), Ceph monitor (MON) and Shared File Systems (manila) services. Some of these services can coexist on the same node or they can have one or more dedicated nodes. Ceph Storage cluster with containerized object storage daemons (OSDs) that run on Ceph Storage nodes. An isolated storage network that serves as the Ceph public network on which the clients can communicate with Ceph service daemons. To facilitate this, the storage network is made available as a provider network for users to connect their VMs and mount CephFS shares. Important You cannot use the Shared File Systems service (manila) with the CephFS native driver to serve shares to OpenShift Container Platform through Manila CSI, because Red Hat does not support this type of deployment. For more information, contact Red Hat Support. The Shared File Systems (manila) service provides APIs that allow the tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , allows the Shared File Systems service to use native CephFS as a back end. You can install native CephFS in an integrated deployment managed by director. When director deploys the Shared File Systems service with a CephFS back end on the overcloud, it automatically creates the required data center storage network. However, you must create the corresponding storage provider network on the overcloud. For more information about network planning, see Overcloud networks in Director Installation and Usage . 
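Creating the storage provider network and restricting it to trusted projects is done with standard OpenStack networking commands after deployment. The commands below are a rough sketch of that workflow using the unified OpenStack client; the network type, physical network name, segment ID, subnet range, and project ID are all placeholders that must match your environment, and the exact procedure for your release is documented in the networking guide.
# Expose the isolated Storage network as a provider network (placeholder values)
openstack network create StorageNetwork --provider-network-type vlan --provider-physical-network datacentre --provider-segment 30
openstack subnet create StorageSubnet --network StorageNetwork --subnet-range 172.16.1.0/24
# Grant access to a trusted project only, instead of sharing the network globally
openstack network rbac create --target-project <trusted_project_id> --action access_as_shared --type network StorageNetwork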
Although you can manually configure the Shared File Systems service by editing the /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf file for the node, any settings can be overwritten by the Red Hat OpenStack Platform director in future overcloud updates. Red Hat only supports deployments of the Shared File Systems service that are managed by director. 6.4. Requirements You can deploy a native CephFS back end with new or existing Red Hat OpenStack Platform (RHOSP) environments if you meet the following requirements: Use Red Hat OpenStack Platform version 17.0 or later. Configure a new Red Hat Ceph Storage cluster at the same time as the native CephFS back end. For information about how to deploy Ceph Storage, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Important The RHOSP Shared File Systems service (manila) with the native CephFS back end is supported for use with Red Hat Ceph Storage version 5.2 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Install the Shared File Systems service on a Controller node. This is the default behavior. Use only a single instance of a CephFS back end for the Shared File Systems service. 6.5. File shares File shares are handled differently between the Shared File Systems service (manila), Ceph File System (CephFS), and CephFS through NFS. The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage inherently allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect. With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size share that the Shared File Systems service creates. Access to Ceph shares is determined by MDS authentication capabilities. With native CephFS, file shares are provisioned and accessed through the CephFS protocol. Access control is performed with a CephX authentication scheme that uses CephFS usernames. 6.6. Native CephFS isolated network Native CephFS deployments use the isolated storage network deployed by director as the Ceph public network. Clients use this network to communicate with various Ceph infrastructure service daemons. For more information about isolating networks, see Network isolation in Director Installation and Usage . 6.7. Deploying the native CephFS environment When you are ready to deploy the environment, use the openstack overcloud deploy command with the custom environments and roles required to configure the native CephFS back end. The openstack overcloud deploy command has the following options in addition to other required options. Action Option Additional Information Specify the network configuration with network_data.yaml [filename] -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml You can use a custom environment file to override values for the default networks specified in this network data environment file. This is the default network data file that is available when you use isolated networks. You can omit this file from the openstack overcloud deploy command for brevity. Deploy the Ceph daemons. 
-e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the Ceph metadata server with ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the manila service with the native CephFS back end. -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml Environment file The following example shows an openstack overcloud deploy command that includes options to deploy a Ceph cluster, Ceph MDS, the native CephFS back end, and the networks required for the Ceph cluster: For more information about the openstack overcloud deploy command, see Provisioning and deploying your overcloud in Director Installation and Usage . 6.8. Native CephFS back-end environment file The environment file for defining a native CephFS back end, manila-cephfsnative-config.yaml is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . The manila-cephfsnative-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service. The back end default settings should work for most environments. The example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry . This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS backend. In this case, the default back end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option for CephFS back ends. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster. 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. Snapshots are supported With Ceph Storage 4.1 and later, but the value of this parameter defaults to false . You can set the value to true to ensure that the driver reports the snapshot_support capability to the manila scheduler. 5 ManilaCephFSCephVolumeMode controls the UNIX permissions to set against the manila share created on the native CephFS back end. The value defaults to 755 . 6 ManilaCephFSCephFSProtocolHelperType must be set to CEPHFS to use the native CephFS driver. For more information about environment files, see Environment Files in the Director Installation and Usage guide.
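After the overcloud deployment finishes, cloud administrators usually create a share type whose driver_handles_share_servers extra spec matches the ManilaCephFSDriverHandlesShareServers value shown above, and mark it private so that only trusted projects can use it, in line with the security measures described earlier. The following is a minimal sketch and not the full procedure; the share type name and project ID are placeholders.
# Create a private share type with driver_handles_share_servers=false
manila type-create --is_public false cephfstype false
# Allow a trusted project to use the share type
manila type-access-add cephfstype <trusted_project_id>
# Confirm that the cephfs back end is visible to the scheduler
manila pool-list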
[ "[stack@undercloud ~]USD openstack overcloud deploy -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml", "[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml A Heat environment file which can be used to enable a a Manila CephFS Native driver backend. resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 ManilaCephFSCephFSEnableSnapshots: true 4 ManilaCephFSCephVolumeMode: '0755' 5 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'CEPHFS' 6" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_deploying-the-shared-file-systems-service-with-native-cephfs_deployingcontainerizedrhcs
Chapter 5. Kernel parameters
Chapter 5. Kernel parameters
BridgeNfCallArpTables : Configures sysctl net.bridge.bridge-nf-call-arptables key. The default value is 1.
BridgeNfCallIp6Tables : Configures sysctl net.bridge.bridge-nf-call-ip6tables key. The default value is 1.
BridgeNfCallIpTables : Configures sysctl net.bridge.bridge-nf-call-iptables key. The default value is 1.
ExtraKernelModules : Hash of extra kernel modules to load.
ExtraKernelPackages : List of extra kernel related packages to install.
ExtraSysctlSettings : Hash of extra sysctl settings to apply.
InotifyIntancesMax : Configures sysctl fs.inotify.max_user_instances key. The default value is 1024.
KernelDisableIPv6 : Configures sysctl net.ipv6.{default/all}.disable_ipv6 keys. The default value is 0.
KernelIpForward : Configures net.ipv4.ip_forward key. The default value is 1.
KernelIpNonLocalBind : Configures net.ipv{4,6}.ip_nonlocal_bind key. The default value is 1.
KernelPidMax : Configures sysctl kernel.pid_max key. The default value is 1048576.
NeighbourGcThreshold1 : Configures sysctl net.ipv4.neigh.default.gc_thresh1 value. This is the minimum number of entries to keep in the ARP cache. The garbage collector will not run if there are fewer than this number of entries in the cache. The default value is 1024.
NeighbourGcThreshold2 : Configures sysctl net.ipv4.neigh.default.gc_thresh2 value. This is the soft maximum number of entries to keep in the ARP cache. The garbage collector will allow the number of entries to exceed this for 5 seconds before collection will be performed. The default value is 2048.
NeighbourGcThreshold3 : Configures sysctl net.ipv4.neigh.default.gc_thresh3 value. This is the hard maximum number of entries to keep in the ARP cache. The garbage collector will always run if there are more than this number of entries in the cache. The default value is 4096.
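You typically set these parameters in a custom environment file and pass that file to the openstack overcloud deploy command with the -e option. The snippet below is only an illustrative sketch; the file name, sysctl key, and values are placeholders, and you should confirm the expected hash format for ExtraSysctlSettings against the tripleo-heat-templates shipped with your release.
cat > /home/stack/templates/kernel-tuning.yaml <<'EOF'
parameter_defaults:
  KernelPidMax: 1048576
  ExtraSysctlSettings:
    net.ipv4.tcp_keepalive_time:
      value: 300
EOF
# Include the file when deploying, for example:
# openstack overcloud deploy --templates -e /home/stack/templates/kernel-tuning.yaml ...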
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/overcloud_parameters/kernel-parameters
16.6. Random Number Generator Device
16.6. Random Number Generator Device Random number generators are very important for operating system security. For securing virtual operating systems, Red Hat Enterprise Linux 7 includes virtio-rng , a virtual hardware random number generator device that can provide the guest with fresh entropy on request. On the host physical machine, the hardware RNG interface creates a chardev at /dev/hwrng , which can be opened and then read to fetch entropy from the host physical machine. In co-operation with the rngd daemon, the entropy from the host physical machine can be routed to the guest virtual machine's /dev/random , which is the primary source of randomness. Using a random number generator is particularly useful when a device such as a keyboard, mouse, and other inputs are not enough to generate sufficient entropy on the guest virtual machine. The virtual random number generator device allows the host physical machine to pass through entropy to guest virtual machine operating systems. This procedure can be performed using either the command line or the virt-manager interface. For instructions, see below. For more information about virtio-rng , see Red Hat Enterprise Linux Virtual Machines: Access to Random Numbers Made Easy . Procedure 16.11. Implementing virtio-rng using the Virtual Machine Manager Shut down the guest virtual machine. Select the guest virtual machine and from the Edit menu, select Virtual Machine Details , to open the Details window for the specified guest virtual machine. Click the Add Hardware button. In the Add New Virtual Hardware window, select RNG to open the Random Number Generator window. Figure 16.20. Random Number Generator window Enter the intended parameters and click Finish when done. The parameters are explained in virtio-rng elements . Procedure 16.12. Implementing virtio-rng using command-line tools Shut down the guest virtual machine. Using the virsh edit domain-name command, open the XML file for the intended guest virtual machine. Edit the <devices> element to include the following: ... <devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices> ... Figure 16.21. Random number generator device The random number generator device allows the following XML attributes and elements: virtio-rng elements <model> - The required model attribute specifies what type of RNG device is provided. <backend model> - The <backend> element specifies the source of entropy to be used for the guest. The source model is configured using the model attribute. Supported source models include 'random' and 'egd' . <backend model='random'> - This <backend> type expects a non-blocking character device as input. Examples of such devices are /dev/random and /dev/urandom . The file name is specified as contents of the <backend> element. When no file name is specified the hypervisor default is used. <backend model='egd'> - This back end connects to a source using the EGD protocol. The source is specified as a character device. See character device host physical machine interface for more information.
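After the guest starts with the <rng> device attached, you can verify from inside the guest that the virtio RNG is detected and that rngd feeds entropy into /dev/random. This is a brief check rather than part of the documented procedure; it assumes a Red Hat Enterprise Linux 7 guest with the rng-tools package available.
# Inside the guest virtual machine
cat /sys/devices/virtual/misc/hw_random/rng_available
cat /sys/devices/virtual/misc/hw_random/rng_current
# A virtio source such as virtio_rng.0 should be listed
yum install -y rng-tools
systemctl start rngd
systemctl enable rngd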
[ "<devices> <rng model='virtio'> <rate period='2000' bytes='1234'/> <backend model='random'>/dev/random</backend> <!-- OR --> <backend model='egd' type='udp'> <source mode='bind' service='1234'/> <source mode='connect' host='1.2.3.4' service='1234'/> </backend> </rng> </devices>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_device_configuration-random_number_generator_device
Backup and restore
Backup and restore OpenShift Container Platform 4.10 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team
[ "oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}{\"\\n\"}'", "2023-08-05T14:37:50Z", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-5.1# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate", "notAfter=Jun 6 10:50:07 2023 GMT", "sh-5.1# openssl x509 -in /var/lib/kubelet/pki/kubelet-server-current.pem -noout -enddate", "notAfter=Jun 6 10:50:07 2023 GMT", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1", "Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel.", "for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 10; done", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.23.0 ip-10-0-170-223.ec2.internal Ready master 75m v1.23.0 ip-10-0-211-16.ec2.internal Ready master 75m v1.23.0", "oc get csr", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc get nodes -l node-role.kubernetes.io/worker", "NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.23.0 ip-10-0-182-134.ec2.internal Ready worker 64m v1.23.0 ip-10-0-250-100.ec2.internal Ready worker 64m v1.23.0", "oc get csr", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.10.0 True False False 59m cloud-credential 4.10.0 True False False 85m cluster-autoscaler 4.10.0 True False False 73m config-operator 4.10.0 True False False 73m console 4.10.0 True False False 62m csi-snapshot-controller 4.10.0 True False False 66m dns 4.10.0 True False False 76m etcd 4.10.0 True False False 76m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.23.0 ip-10-0-170-223.ec2.internal Ready master 82m v1.23.0 ip-10-0-179-95.ec2.internal Ready worker 70m v1.23.0 ip-10-0-182-134.ec2.internal Ready worker 70m v1.23.0 ip-10-0-211-16.ec2.internal Ready master 82m v1.23.0 ip-10-0-250-100.ec2.internal Ready worker 69m v1.23.0", "oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers='true'", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", 
\"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - name: default velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: limits: cpu: \"1\" memory: 512Mi requests: cpu: 500m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift 1 - aws resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 5 prefix: <prefix> 6 config: region: <region> profile: \"default\" credential: key: cloud name: cloud-credentials 7 snapshotLocations: 8 - name: default velero: provider: aws config: region: <region> 9 profile: \"default\"", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE 
SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name USDAZURE_STORAGE_ACCOUNT_ID --query \"[?keyName == 'key1'].value\" -o tsv`", "AZURE_ROLE=Velero az role definition create --role-definition '{ \"Name\": \"'USDAZURE_ROLE'\", \"Description\": \"Velero related permissions to perform backups, restores and deletions\", \"Actions\": [ \"Microsoft.Compute/disks/read\", \"Microsoft.Compute/disks/write\", \"Microsoft.Compute/disks/endGetAccess/action\", \"Microsoft.Compute/disks/beginGetAccess/action\", \"Microsoft.Compute/snapshots/read\", \"Microsoft.Compute/snapshots/write\", \"Microsoft.Compute/snapshots/delete\", \"Microsoft.Storage/storageAccounts/listkeys/action\", \"Microsoft.Storage/storageAccounts/regeneratekey/action\" ], \"AssignableScopes\": [\"/subscriptions/'USDAZURE_SUBSCRIPTION_ID'\"] }'", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_STORAGE_ACCOUNT_ACCESS_KEY=USD{AZURE_STORAGE_ACCOUNT_ACCESS_KEY} 1 AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: limits: cpu: \"1\" memory: 512Mi requests: cpu: 500m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> 
spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - azure - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 5 storageAccount: <azure_storage_account_id> 6 subscriptionId: <azure_subscription_id> 7 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 8 provider: azure default: true objectStorage: bucket: <bucket_name> 9 prefix: <prefix> 10 snapshotLocations: 11 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: 
DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: limits: cpu: \"1\" memory: 512Mi requests: cpu: 500m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - gcp - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: provider: gcp default: true credential: key: cloud name: cloud-credentials-gcp 5 objectStorage: bucket: <bucket_name> 6 prefix: <prefix> 7 snapshotLocations: 8 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 9", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: limits: cpu: \"1\" memory: 512Mi requests: cpu: 500m memory: 256Mi", 
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - aws - openshift 1 resourceTimeout: 10m 2 restic: enable: true 3 podConfig: nodeSelector: <node_selector> 4 backupLocations: - velero: config: profile: \"default\" region: minio s3Url: <url> 5 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 6 objectStorage: bucket: <bucket_name> 7 prefix: <prefix> 8", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero", "oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node selector> 1 resourceAllocations: limits: cpu: \"1\" memory: 512Mi requests: cpu: 500m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: configuration: velero: defaultPlugins: - kubevirt 1 - gcp 2 - csi 3 - openshift 4 resourceTimeout: 10m 5 restic: enable: true 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp 8 default: true credential: key: cloud name: <default_secret> 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE 
pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1", "oc get backupStorageLocations", "NAME PHASE LAST VALIDATED AGE DEFAULT velero-sample-1 Available 11s 31m", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s labelSelector: 5 matchLabels: app=<label_1> app=<label_2> app=<label_3> orLabelSelectors: 6 - matchLabels: app=<label_1> app=<label_2> app=<label_3>", "oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" driver: <csi_driver> deletionPolicy: Retain", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToRestic: true 1", "apiVersion: v1 kind: Secret metadata: name: <secret_name> namespace: openshift-adp type: Opaque stringData: RESTIC_PASSWORD: <secure_restic_password>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample namespace: openshift-adp spec: features: dataMover: enable: true credentialName: <secret_name> 1 backupLocations: - velero: config: profile: default region: us-east-1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: <bucket_prefix> provider: aws configuration: restic: enable: <true_or_false> velero: defaultPlugins: - openshift - aws - csi", "apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotBackup metadata: name: <vsb_name> namespace: <namespace_name> 1 spec: volumeSnapshotContent: name: <snapcontent_name> protectedNamespace: <adp_namespace> resticSecretRef: name: <restic_secret_name>", "apiVersion: datamover.oadp.openshift.io/v1alpha1 kind: VolumeSnapshotRestore metadata: name: <vsr_name> namespace: <namespace_name> 1 spec: protectedNamespace: <protected_ns> 2 resticSecretRef: name: <restic_secret_name> volumeSnapshotMoverBackupRef: sourcePVCData: name: <source_pvc_name> size: <source_pvc_size> resticrepository: <your_restic_repo> volumeSnapshotClassName: <vsclass_name>", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> namespace: <protected_ns> 1 spec: includedNamespaces: - <app_ns> storageLocation: velero-sample-1", "oc get vsb -n <app_ns>", "oc get vsb <vsb_name> -n <app_ns> -o jsonpath=\"{.status.phase}\"", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> 
namespace: <protected_ns> spec: backupName: <previous_backup_name> restorePVs: true", "oc get vsr -n <app_ns>", "oc get vsr <vsr_name> -n <app_ns> -o jsonpath=\"{.status.phase}\"", "oc delete vsb -n <app_namespace> --all", "oc delete vsr -n <app_namespace> --all", "oc delete volumesnapshotcontent --all", "oc delete vsb -n <app_namespace> --all", "oc delete volumesnapshot -A --all", "oc delete volumesnapshotcontent --all", "oc delete pvc -n <protected_namespace> --all", "oc delete replicationsource -n <protected_namespace> --all", "oc delete vsr -n <app-ns> --all", "oc delete volumesnapshot -A --all", "oc delete volumesnapshotcontent --all", "oc delete replicationdestination -n <protected_namespace> --all", "apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11", "oc get backupStorageLocations", "NAME PHASE LAST VALIDATED AGE DEFAULT velero-sample-1 Available 11s 31m", "cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToRestic: true 4 ttl: 720h0m0s EOF", "oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'", "oc delete backup <backup_CR_name> -n <velero_namespace>", "velero backup delete <backup_CR_name> -n <velero_namespace>", "apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3", "oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'", "oc get all -n <namespace> 1", "bash dc-restic-post-restore.sh <restore-name>", "#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } OADP_NAMESPACE=USD{OADP_NAMESPACE:=openshift-adp} if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo using OADP Namespace USDOADP_NAMESPACE echo restore: USD1 label=USD(label_name USD1) echo label: USDlabel echo Deleting disconnected restore pods delete pods -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done", "apiVersion: 
velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9", "alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'", "oc describe <velero_cr> <cr_name>", "oc logs pod/<velero>", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: requests: cpu: 500m memory: 256Mi", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: requests: cpu: 500m memory: 256Mi", "requests: cpu: 500m memory: 128Mi", "velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev", "oc get mutatingwebhookconfigurations", "[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>", "oc delete backup <backup> -n openshift-adp", "time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq", "oc delete backup <backup> -n openshift-adp", "oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true", "spec: configuration: restic: enable: true supplementalGroups: - <group_id> 1", "oc delete resticrepository openshift-adp <name_of_the_restic_repository>", "time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds 
time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1", "oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 -- /usr/bin/gather_metrics_dump", "tar -xvzf must-gather/metrics/prom_data.tar.gz", "make prometheus-run", "Started Prometheus on http://localhost:9090", "make prometheus-cleanup", "oc api-resources", "apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions", "velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'", "2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy", "oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running", "ip-10-0-131-183.ec2.internal stopped 1", "oc get nodes -o 
jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable", "ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1", "oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"", "ip-10-0-131-183.ec2.internal NotReady master 122m v1.23.0 1", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.23.0 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.23.0 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.23.0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 6fc1e7c9db35841d", "Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 
us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running 
clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc debug node/ip-10-0-131-183.ec2.internal 1", "sh-4.2# chroot /host", "sh-4.2# mkdir /var/lib/etcd-backup", "sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/", "sh-4.2# mv /var/lib/etcd/ /tmp", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 62bcf33650a7170a", "Member 62bcf33650a7170a removed from cluster 
ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl endpoint health", "https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | 
https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+", "sh-4.2# etcdctl member remove 7a8197040a5126c8", "Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep openshift-control-plane-2", "etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m", "oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get machine examplecluster-control-plane-2 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: \"\" type: InternalIP - address: fe80::4adf:37ff:feb0:8aa1%ens1f1.373 type: InternalDNS - address: fe80::4adf:37ff:feb0:8aa1%ens1f1.371 type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Machine name: fe80::4adf:37ff:feb0:8aa1%ens1f1.372 uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: machine.openshift.io/v1beta1 
conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: Machine", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: examplecluster-control-plane-3", "providerID: baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135", "annotations: machine.openshift.io/instance-state: externally provisioned generation: 2", "lastTransitionTime: \"2022-08-03T08:40:36Z\" message: 'Drain operation currently blocked by: [{Name:EtcdQuorumOperator Owner:clusteroperator/etcd}]' reason: HookPresent severity: Warning status: \"False\" type: Drainable lastTransitionTime: \"2022-08-03T08:39:55Z\" status: \"True\" type: InstanceExists lastTransitionTime: \"2022-08-03T08:36:37Z\" status: \"True\" type: Terminable lastUpdated: \"2022-08-03T08:40:36Z\" nodeRef: kind: Node name: openshift-control-plane-2 uid: 788df282-6507-4ea2-9a43-24f237ccbc3c phase: Running", "oc get clusteroperator baremetal", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.10.x True False False 3d15h", "oc delete bmh openshift-control-plane-2 -n openshift-machine-api", "baremetalhost.metal3.io \"openshift-control-plane-2\" deleted", "oc delete machine -n openshift-machine-api examplecluster-control-plane-2", "oc edit machine -n openshift-machine-api examplecluster-control-plane-2", "finalizers: - machine.machine.openshift.io", "machine.machine.openshift.io/examplecluster-control-plane-2 edited", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.24.0+9546431 openshift-control-plane-1 Ready master 3h24m v1.24.0+9546431 openshift-compute-0 Ready worker 176m v1.24.0+9546431 openshift-compute-1 Ready worker 176m v1.24.0+9546431", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/sda userData: name: master-user-data-managed namespace: openshift-machine-api EOF", "oc get bmh -n openshift-machine-api NAME STATE 
CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get bmh -n openshift-machine-api", "oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc get nodes", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.24.0+9546431 openshift-control-plane-1 Ready master 4h26m v1.24.0+9546431 openshift-control-plane-2 Ready master 12m v1.24.0+9546431 openshift-compute-0 Ready worker 3h58m v1.24.0+9546431 openshift-compute-1 Ready worker 3h58m v1.24.0+9546431", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", 
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+", "etcdctl endpoint health --cluster", "https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms", "oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision", "sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | grep -v operator", "sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | grep -v operator", "sudo mv /var/lib/etcd/ /tmp", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s 
kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "sudo rm -f /var/lib/ovn/etc/*.db", "oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s", "oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "annotations: machine.openshift.io/instance-state: running generation: 2", "resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID 
STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc -n 
openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/backup_and_restore/index
Chapter 3. Configuring certificates
Chapter 3. Configuring certificates 3.1. Replacing the default ingress certificate 3.1.1. Understanding the default ingress certificate By default, OpenShift Container Platform uses the Ingress Operator to create an internal CA and issue a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The internal infrastructure CA certificates are self-signed. While this process might be perceived as bad practice by some security or PKI teams, any risk here is minimal. The only clients that implicitly trust these certificates are other components within the cluster. Replacing the default wildcard certificate with one that is issued by a public CA already included in the CA bundle as provided by the container userspace allows external clients to connect securely to applications running under the .apps sub-domain. 3.1.2. Replacing the default ingress certificate You can replace the default ingress certificate for all applications under the .apps subdomain. After you replace the certificate, all applications, including the web console and CLI, will have encryption provided by the specified certificate. Prerequisites You must have a wildcard certificate for the fully qualified .apps subdomain and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing *.apps.<clustername>.<domain> . The certificate file can contain one or more certificates in a chain. The wildcard certificate must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Copy the root CA certificate into an additional PEM format file. Verify that all certificates which include -----END CERTIFICATE----- also end with one carriage return after that line. Important Updating the certificate authority (CA) causes the nodes in your cluster to reboot. Procedure Create a config map that includes only the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the root CA certificate file on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Create a secret that contains the wildcard certificate chain and key: USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-ingress 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the Ingress Controller configuration with the newly created secret: USD oc patch ingresscontroller.operator default \ --type=merge -p \ '{"spec":{"defaultCertificate": {"name": "<secret>"}}}' \ 1 -n openshift-ingress-operator 1 Replace <secret> with the name used for the secret in the previous step. Important To trigger the Ingress Operator to perform a rolling update, you must update the name of the secret.
Because the kubelet automatically propagates changes to the secret in the volume mount, updating the secret contents does not trigger a rolling update. For more information, see this Red Hat Knowledgebase Solution . Additional resources Replacing the CA Bundle certificate Proxy certificate customization 3.2. Adding API server certificates The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. Clients outside of the cluster will not be able to verify the API server's certificate by default. This certificate can be replaced by one that is issued by a CA that clients trust. 3.2.1. Add an API server named certificate The default API server certificate is issued by an internal OpenShift Container Platform cluster CA. You can add one or more alternative certificates that the API server will return based on the fully qualified domain name (FQDN) requested by the client, for example when a reverse proxy or load balancer is used. Prerequisites You must have a certificate for the FQDN and its corresponding private key. Each should be in a separate PEM format file. The private key must be unencrypted. If your key is encrypted, decrypt it before importing it into OpenShift Container Platform. The certificate must include the subjectAltName extension showing the FQDN. The certificate file can contain one or more certificates in a chain. The certificate for the API server FQDN must be the first certificate in the file. It can then be followed with any intermediate certificates, and the file should end with the root CA certificate. Warning Do not provide a named certificate for the internal load balancer (host name api-int.<cluster_name>.<base_domain> ). Doing so will leave your cluster in a degraded state. Procedure Log in to the new API as the kubeadmin user. USD oc login -u kubeadmin -p <password> https://FQDN:6443 Get the kubeconfig file. USD oc config view --flatten > kubeconfig-newapi Create a secret that contains the certificate chain and private key in the openshift-config namespace. USD oc create secret tls <secret> \ 1 --cert=</path/to/cert.crt> \ 2 --key=</path/to/cert.key> \ 3 -n openshift-config 1 <secret> is the name of the secret that will contain the certificate chain and private key. 2 </path/to/cert.crt> is the path to the certificate chain on your local file system. 3 </path/to/cert.key> is the path to the private key associated with this certificate. Update the API server to reference the created secret. USD oc patch apiserver cluster \ --type=merge -p \ '{"spec":{"servingCerts": {"namedCertificates": [{"names": ["<FQDN>"], 1 "servingCertificate": {"name": "<secret>"}}]}}}' 2 1 Replace <FQDN> with the FQDN that the API server should provide the certificate for. Do not include the port number. 2 Replace <secret> with the name used for the secret in the previous step. Examine the apiserver/cluster object and confirm the secret is now referenced. USD oc get apiserver cluster -o yaml Example output ... spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret> ... Check the kube-apiserver operator, and verify that a new revision of the Kubernetes API server rolls out. It may take a minute for the operator to detect the configuration change and trigger a new deployment. While the new revision is rolling out, PROGRESSING will report True .
USD oc get clusteroperators kube-apiserver Do not continue to the next step until PROGRESSING is listed as False , as shown in the following output: Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m If PROGRESSING is showing True , wait a few minutes and try again. Note A new revision of the Kubernetes API server only rolls out if the API server named certificate is added for the first time. When the API server named certificate is renewed, a new revision of the Kubernetes API server does not roll out because the kube-apiserver pods dynamically reload the updated certificate. 3.3. Securing service traffic using service serving certificate secrets 3.3.1. Understanding service serving certificates Service serving certificates are intended to support complex middleware applications that require encryption. These certificates are issued as TLS web server certificates. The service-ca controller uses the x509.SHA256WithRSA signature algorithm to generate service certificates. The generated certificate and key are in PEM format, stored in tls.crt and tls.key respectively, within a created secret. The certificate and key are automatically replaced when they get close to expiration. The service CA certificate, which issues the service certificates, is valid for 26 months and is automatically rotated when there is less than 13 months validity left. After rotation, the previous service CA configuration is still trusted until its expiration. This allows a grace period for all affected services to refresh their key material before the expiration. If you do not upgrade your cluster during this grace period, which restarts services and refreshes their key material, you might need to manually restart services to avoid failures after the previous service CA expires. Note You can use the following command to manually restart all pods in the cluster. Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done 3.3.2. Add a service certificate To secure communication to your service, generate a signed serving certificate and key pair into a secret in the same namespace as the service. The generated certificate is only valid for the internal service DNS name <service.name>.<service.namespace>.svc , and is only valid for internal communications. If your service is a headless service (no clusterIP value set), the generated certificate also contains a wildcard subject in the format of *.<service.name>.<service.namespace>.svc . Important Because the generated certificates contain wildcard subjects for headless services, you must not use the service CA if your client must differentiate between individual pods. In this case: Generate individual TLS certificates by using a different CA. Do not accept the service CA as a trusted CA for connections that are directed to individual pods and must not be impersonated by other pods. These connections must be configured to trust the CA that was used to generate the individual TLS certificates. Prerequisites You must have a service defined.
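If you do not already have a service to secure, the following minimal definition is enough to try out the annotation in this procedure. This is a sketch, not part of the original procedure; the test1 name matches the example service used later in this section, and the selector and port values are placeholder assumptions:
apiVersion: v1
kind: Service
metadata:
  name: test1
spec:
  selector:
    app: test1
  ports:
  - name: https
    port: 8443
    targetPort: 8443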
Procedure Annotate the service with service.beta.openshift.io/serving-cert-secret-name : USD oc annotate service <service_name> \ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2 1 Replace <service_name> with the name of the service to secure. 2 <secret_name> will be the name of the generated secret containing the certificate and key pair. For convenience, it is recommended that this be the same as <service_name> . For example, use the following command to annotate the service test1 : USD oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1 Examine the service to confirm that the annotations are present: USD oc describe service <service_name> Example output ... Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837 ... After the cluster generates a secret for your service, your Pod spec can mount it, and the pod will run after it becomes available. Additional resources You can use a service certificate to configure a secure route using reencrypt TLS termination. For more information, see Creating a re-encrypt route with a custom certificate . 3.3.3. Add the service CA bundle to a config map A pod can access the service CA certificate by mounting a ConfigMap object that is annotated with service.beta.openshift.io/inject-cabundle=true . Once annotated, the cluster automatically injects the service CA certificate into the service-ca.crt key on the config map. Access to this CA certificate allows TLS clients to verify connections to services using service serving certificates. Important After adding this annotation to a config map all existing data in it is deleted. It is recommended to use a separate config map to contain the service-ca.crt , instead of using the same config map that stores your pod configuration. Procedure Annotate the config map with service.beta.openshift.io/inject-cabundle=true : USD oc annotate configmap <config_map_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <config_map_name> with the name of the config map to annotate. Note Explicitly referencing the service-ca.crt key in a volume mount will prevent a pod from starting until the config map has been injected with the CA bundle. This behavior can be overridden by setting the optional field to true for the volume's serving certificate configuration. For example, use the following command to annotate the config map test1 : USD oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true View the config map to ensure that the service CA bundle has been injected: USD oc get configmap <config_map_name> -o yaml The CA bundle is displayed as the value of the service-ca.crt key in the YAML output: apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE----- ... 3.3.4. Add the service CA bundle to an API service You can annotate an APIService object with service.beta.openshift.io/inject-cabundle=true to have its spec.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Procedure Annotate the API service with service.beta.openshift.io/inject-cabundle=true : USD oc annotate apiservice <api_service_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <api_service_name> with the name of the API service to annotate. 
For example, use the following command to annotate the API service test1 : USD oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true View the API service to ensure that the service CA bundle has been injected: USD oc get apiservice <api_service_name> -o yaml The CA bundle is displayed in the spec.caBundle field in the YAML output: apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: caBundle: <CA_BUNDLE> ... 3.3.5. Add the service CA bundle to a custom resource definition You can annotate a CustomResourceDefinition (CRD) object with service.beta.openshift.io/inject-cabundle=true to have its spec.conversion.webhook.clientConfig.caBundle field populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note The service CA bundle will only be injected into the CRD if the CRD is configured to use a webhook for conversion. It is only useful to inject the service CA bundle if a CRD's webhook is secured with a service CA certificate. Procedure Annotate the CRD with service.beta.openshift.io/inject-cabundle=true : USD oc annotate crd <crd_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <crd_name> with the name of the CRD to annotate. For example, use the following command to annotate the CRD test1 : USD oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true View the CRD to ensure that the service CA bundle has been injected: USD oc get crd <crd_name> -o yaml The CA bundle is displayed in the spec.conversion.webhook.clientConfig.caBundle field in the YAML output: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE> ... 3.3.6. Add the service CA bundle to a mutating webhook configuration You can annotate a MutatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the mutating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <mutating_webhook_name> with the name of the mutating webhook configuration to annotate. For example, use the following command to annotate the mutating webhook configuration test1 : USD oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the mutating webhook configuration to ensure that the service CA bundle has been injected: USD oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 
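To inspect the injected certificate itself rather than only confirming that the caBundle field is populated, you can decode the field and read it with openssl. The following command is a sketch: the .webhooks[0] index assumes you want the first webhook in the configuration, and openssl prints only the first certificate if the bundle contains more than one: USD oc get mutatingwebhookconfigurations test1 \ -o jsonpath='{.webhooks[0].clientConfig.caBundle}' \ | base64 --decode \ | openssl x509 -noout -subject -enddate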
3.3.7. Add the service CA bundle to a validating webhook configuration You can annotate a ValidatingWebhookConfiguration object with service.beta.openshift.io/inject-cabundle=true to have the clientConfig.caBundle field of each webhook populated with the service CA bundle. This allows the Kubernetes API server to validate the service CA certificate used to secure the targeted endpoint. Note Do not set this annotation for admission webhook configurations that need to specify different CA bundles for different webhooks. If you do, then the service CA bundle will be injected for all webhooks. Procedure Annotate the validating webhook configuration with service.beta.openshift.io/inject-cabundle=true : USD oc annotate validatingwebhookconfigurations <validating_webhook_name> \ 1 service.beta.openshift.io/inject-cabundle=true 1 Replace <validating_webhook_name> with the name of the validating webhook configuration to annotate. For example, use the following command to annotate the validating webhook configuration test1 : USD oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true View the validating webhook configuration to ensure that the service CA bundle has been injected: USD oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml The CA bundle is displayed in the clientConfig.caBundle field of all webhooks in the YAML output: apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: "true" ... webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE> ... 3.3.8. Manually rotate the generated service certificate You can rotate the service certificate by deleting the associated secret. Deleting the secret results in a new one being automatically created, resulting in a new certificate. Prerequisites A secret containing the certificate and key pair must have been generated for the service. Procedure Examine the service to determine the secret containing the certificate. This is found in the serving-cert-secret-name annotation, as seen below. USD oc describe service <service_name> Example output ... service.beta.openshift.io/serving-cert-secret-name: <secret> ... Delete the generated secret for the service. This process will automatically recreate the secret. USD oc delete secret <secret> 1 1 Replace <secret> with the name of the secret from the previous step. Confirm that the certificate has been recreated by obtaining the new secret and examining the AGE . USD oc get secret <service_name> Example output NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s 3.3.9. Manually rotate the service CA certificate The service CA is valid for 26 months and is automatically refreshed when there is less than 13 months validity left. If necessary, you can manually refresh the service CA by using the following procedure. Warning A manually-rotated service CA does not maintain trust with the previous service CA. You might experience a temporary service disruption until the pods in the cluster are restarted, which ensures that pods are using service serving certificates issued by the new service CA. Prerequisites You must be logged in as a cluster admin. Procedure View the expiration date of the current service CA certificate by using the following command. USD oc get secrets/signing-key -n openshift-service-ca \ -o template='{{index .data "tls.crt"}}' \ | base64 --decode \ | openssl x509 -noout -enddate Manually rotate the service CA.
This process generates a new service CA which will be used to sign the new service certificates. USD oc delete secret/signing-key -n openshift-service-ca To apply the new certificates to all services, restart all the pods in your cluster. This command ensures that all services use the updated certificates. USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Warning This command will cause a service interruption, as it goes through and deletes every running pod in every namespace. These pods will automatically restart after they are deleted. 3.4. Updating the CA bundle Important Updating the certificate authority (CA) will cause the nodes of your cluster to reboot. 3.4.1. Understanding the CA Bundle certificate Proxy certificates allow users to specify one or more custom certificate authority (CA) used by platform components when making egress connections. The trustedCA field of the Proxy object is a reference to a config map that contains a user-provided trusted certificate authority (CA) bundle. This bundle is merged with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle and injected into the trust store of platform components that make egress HTTPS calls. For example, image-registry-operator calls an external image registry to download images. If trustedCA is not specified, only the RHCOS trust bundle is used for proxied HTTPS connections. Provide custom CA certificates to the RHCOS trust bundle if you want to use your own certificate infrastructure. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from required key ca-bundle.crt and copying it to a config map named trusted-ca-bundle in the openshift-config-managed namespace. The namespace for the config map referenced by trustedCA is openshift-config : apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 3.4.2. Replacing the CA Bundle certificate Procedure Create a config map that includes the root CA certificate used to sign the wildcard certificate: USD oc create configmap custom-ca \ --from-file=ca-bundle.crt=</path/to/example-ca.crt> \ 1 -n openshift-config 1 </path/to/example-ca.crt> is the path to the CA certificate bundle on your local file system. Update the cluster-wide proxy configuration with the newly created config map: USD oc patch proxy/cluster \ --type=merge \ --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}' Additional resources Replacing the default ingress certificate Enabling the cluster-wide proxy Proxy certificate customization
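As an illustrative check that is not part of the official procedure, you can confirm that the validator has copied your custom CA into the merged trust bundle described above. The config map name, namespace, and key come from the preceding paragraphs; the grep count simply shows how many certificates the merged bundle now carries.
# Count the certificates in the merged trust bundle (openshift-config-managed namespace)
oc get configmap trusted-ca-bundle -n openshift-config-managed \
  -o jsonpath='{.data.ca-bundle\.crt}' \
  | grep -c 'BEGIN CERTIFICATE'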
[ "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 
service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/configuring-certificates
8.4. Understanding the Network Teaming Daemon and the "Runners"
8.4. Understanding the Network Teaming Daemon and the "Runners" The Team daemon, teamd , uses libteam to control one instance of the team driver. This instance of the team driver adds instances of a hardware device driver to form a " team " of network links. The team driver presents a network interface, team0 for example, to the other parts of the kernel. The interfaces created by instances of the team driver are given names such as team0 , team1 , and so forth in the documentation. This is for ease of understanding and other names can be used. The logic common to all methods of teaming is implemented by teamd ; those functions that are unique to the different load sharing and backup methods, such as round-robin, are implemented by separate units of code referred to as " runners " . Because words such as " module " and " mode " already have specific meanings in relation to the kernel, the word " runner " was chosen to refer to these units of code. The user specifies the runner in the JSON format configuration file and the code is then compiled into an instance of teamd when the instance is created. A runner is not a plug-in because the code for a runner is compiled into an instance of teamd as it is being created. Code could be created as a plug-in for teamd should the need arise. The following runners are available at time of writing. broadcast (data is transmitted over all ports) round-robin (data is transmitted over all ports in turn) active-backup (one port or link is used while others are kept as a backup) loadbalance (with active Tx load balancing and BPF-based Tx port selectors) lacp (implements the 802.3ad Link Aggregation Control Protocol) In addition, the following link-watchers are available: ethtool (Libteam lib uses ethtool to watch for link state changes). This is the default if no other link-watcher is specified in the configuration file. arp_ping (The arp_ping utility is used to monitor the presence of a far-end hardware address using ARP packets.) nsna_ping (Neighbor Advertisements and Neighbor Solicitation from the IPv6 Neighbor Discovery protocol are used to monitor the presence of a neighbor's interface) There are no restrictions in the code to prevent a particular link-watcher from being used with a particular runner, however when using the lacp runner, ethtool is the only recommended link-watcher.
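To make the relationship between runners and link-watchers concrete, the following is a minimal sketch of a teamd JSON configuration. The port names em1 and em2 and the file path are placeholders, and note that the active-backup runner is spelled activebackup in the configuration file.
# Write an example teamd configuration selecting a runner and a link-watcher
cat > /etc/teamd/team0.conf << 'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "link_watch": { "name": "ethtool" },
  "ports": { "em1": {}, "em2": {} }
}
EOF
# Start an instance of teamd with this configuration (debug output, daemonized)
teamd -g -f /etc/teamd/team0.conf -d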
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Understanding_the_Network_Teaming_Daemon_and_the_Runners
Chapter 1. Managing Bricks
Chapter 1. Managing Bricks 1.1. Creating a brick using the Administration Portal This process creates a new thinly provisioned logical volume on a specified storage device, for use as a brick in a Gluster volume. Log in to the Administration Portal. Click Compute Hosts and select the host for the brick. Click Storage Devices . If no storage devices are visible, try synchronizing the volume: Section 2.9, "Synchronizing volume state using the Administration Portal" . Select a storage device and click Create Brick . The Create Brick window appears. Specify the Brick Name . Specify the Mount Point for the brick. (Optional) To create a RAID array, specify the following: No. of physical disks in the RAID array RAID Type (Optional) To assign a logical volume cache for this brick, specify a Device under Cache Device . This is recommended when your main storage device is not a solid state disk. Click OK . 1.2. Resetting a brick using the Administration Portal Resetting a brick lets you reconfigure a brick as though you are adding it to the cluster for the first time, using the same UUID, hostname, and path. See the Red Hat Gluster Storage Administration Guide for more information. Log in to the Administration Portal. Click Storage Volumes . Click the name of the volume that runs on the brick you want to reset. Click Bricks . Click Reset Brick . The Reset Brick window opens. Click OK to confirm the operation. 1.3. Replacing a brick using the Administration Portal Log in to the Administration Portal. Click Storage Volumes . Click the name of the volume that runs on the brick you want to replace. Click Bricks . Click Replace Brick . The Replace Brick window opens. Select the Host of the replacement brick. Select the Brick Directory of the replacement brick. Click OK . 1.4. Deleting a brick using the Administration Portal Log in to the Administration Portal. Click Storage Volumes . Select the volume that contains the brick you want to remove. Click Stop and confirm that the volume should be stopped. Click Bricks . Select the brick to remove. Click Remove and confirm that the brick should be removed.
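For readers who manage bricks from a shell rather than the Administration Portal, the portal actions correspond roughly to the following Gluster CLI operations. This is only an illustrative sketch with placeholder volume, host, and brick-path names; confirm the exact syntax in the Red Hat Gluster Storage Administration Guide referenced above before running it.
# Reset a brick (same host and path), then replace a brick with one on another host
gluster volume reset-brick myvol host1:/gluster_bricks/brick1/brick start
gluster volume reset-brick myvol host1:/gluster_bricks/brick1/brick host1:/gluster_bricks/brick1/brick commit force
gluster volume replace-brick myvol host1:/gluster_bricks/brick1/brick host2:/gluster_bricks/brick1/brick commit force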
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/managing_red_hat_gluster_storage_using_rhv_administration_portal/rhv-gluster-brick-mgmt
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.21/proc-providing-feedback-on-redhat-documentation
Chapter 3. LVM Administration Overview
Chapter 3. LVM Administration Overview This chapter provides an overview of the administrative procedures you use to configure LVM logical volumes. This chapter is intended to provide a general understanding of the steps involved. For specific step-by-step examples of common LVM configuration procedures, see Chapter 5, LVM Configuration Examples . For descriptions of the CLI commands you can use to perform LVM administration, see Chapter 4, LVM Administration with CLI Commands . 3.1. Logical Volume Creation Overview The following is a summary of the steps to perform to create an LVM logical volume. Initialize the partitions you will use for the LVM volume as physical volumes (this labels them). Create a volume group. Create a logical volume. After creating the logical volume you can create and mount the file system. The examples in this document use GFS2 file systems. Create a GFS2 file system on the logical volume with the mkfs.gfs2 command. Create a new mount point with the mkdir command. In a clustered system, create the mount point on all nodes in the cluster. Mount the file system. You may want to add a line to the fstab file for each node in the system. Note Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 7 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). Creating the LVM volume is machine independent, since the storage area for LVM setup information is on the physical volumes and not the machine where the volume was created. Servers that use the storage have local copies, but can recreate that from what is on the physical volumes. You can attach physical volumes to a different server if the LVM versions are compatible.
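The following condensed example walks through the steps listed above. The device name, volume names, sizes, and cluster name are placeholders, and the GFS2 options assume a two-journal cluster file system; adjust them for your environment.
pvcreate /dev/sdb1                          # initialize the partition as a physical volume
vgcreate myvg /dev/sdb1                     # create a volume group
lvcreate -L 10G -n mylv myvg                # create a logical volume
mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/myvg/mylv   # create the GFS2 file system
mkdir -p /mnt/mygfs2                        # create the mount point (on all cluster nodes)
mount /dev/myvg/mylv /mnt/mygfs2            # mount the file system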
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_administration
function::user_string_n_quoted
function::user_string_n_quoted Name function::user_string_n_quoted - Retrieves and quotes a string from user space. Synopsis Arguments addr The user space address to retrieve the string from. n The maximum length of the string (if not null terminated). General Syntax user_string_n_quoted:string(addr:long, n:long) Description Returns up to n characters of a C string from the given user space memory address where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Reports " NULL " for address zero. Returns " <unknown> " in the rare cases when userspace data is not accessible at the given address.
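As a usage sketch, the following one-line SystemTap invocation prints a quoted copy of the user-space file name passed to the kernel's do_sys_open function. The probed function and its argument name vary between kernel versions and require kernel debuginfo, so treat this as illustrative only.
stap -e 'probe kernel.function("do_sys_open") { printf("%s: %s\n", execname(), user_string_n_quoted($filename, 64)) }'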
[ "function user_string_n_quoted:string(addr:long,n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-string-n-quoted
Chapter 1. Overview of machine management
Chapter 1. Overview of machine management You can use machine management to flexibly work with underlying infrastructure like Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), OpenStack, Red Hat Virtualization (RHV), and vSphere to manage the OpenShift Container Platform cluster. You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies. The OpenShift Container Platform cluster can horizontally scale up and down when the load increases or decreases. It is important to have a cluster that adapts to changing workloads. Machine management is implemented as a Custom Resource Definition (CRD). A CRD object defines a new unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle. The Machine API Operator provisions the following resources: MachineSet Machine Cluster Autoscaler Machine Autoscaler Machine Health Checks 1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.10 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.10 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute need. Warning Control plane machines cannot be managed by machine sets. The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down. Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. 
Beginning with OpenShift Container Platform version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program sends out machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. 1.2. Managing compute machines As a cluster administrator you can: Create a machine set on: AWS Azure GCP OpenStack RHV vSphere Create a machine set for a bare metal deployment: Creating a compute machine set on bare metal Manually scale a machine set by adding or removing a machine from the machine set. Modify a machine set through the MachineSet YAML configuration file. Delete a machine. Create infrastructure machine sets . Configure and deploy a machine health check to automatically fix damaged machines in a machine pool. 1.3. Applying autoscaling to an OpenShift Container Platform cluster You can automatically scale your OpenShift Container Platform cluster to ensure flexibility for changing workloads. To autoscale your cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each compute machine set. The cluster autoscaler increases and decreases the size of the cluster based on deployment needs. The machine autoscaler adjusts the number of machines in the compute machine sets that you deploy in your OpenShift Container Platform cluster. 1.4. Adding compute machines on user-provisioned infrastructure User-provisioned infrastructure is an environment where you can deploy infrastructure such as compute, network, and storage resources that host the OpenShift Container Platform. You can add compute machines to a cluster on user-provisioned infrastructure during or after the installation process. 1.5. Adding RHEL compute machines to your cluster As a cluster administrator, you can perform the following actions: Add Red Hat Enterprise Linux (RHEL) compute machines , also known as worker machines, to a user-provisioned infrastructure cluster or an installation-provisioned infrastructure cluster. Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster.
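As a small illustration of manual scaling, the commands below list the compute machine sets and then change the replica count of one of them. The machine set name is a placeholder; machine sets live in the openshift-machine-api namespace.
oc get machinesets -n openshift-machine-api
oc scale machineset <machineset_name> --replicas=3 -n openshift-machine-api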
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/overview-of-machine-management
7.5. RHEA-2014:1433 - new package: google-crosextra-caladea-fonts
7.5. RHEA-2014:1433 - new package: google-crosextra-caladea-fonts A new google-crosextra-caladea-fonts package is now available for Red Hat Enterprise Linux 6. The Caladea font family is metric-compatible with the Cambria font. Caladea is a serif typeface family based on the Lato font. This enhancement update adds the google-crosextra-caladea-fonts package to Red Hat Enterprise Linux 6. (BZ# 1025629 ) All users who require google-crosextra-caladea-fonts are advised to install this new package.
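To act on this advisory, the package can be installed with yum and the fonts verified with fontconfig; the verification step is optional and shown only as a convenience.
yum install google-crosextra-caladea-fonts
fc-list | grep -i caladea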
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1433
Chapter 17. Creating a dummy interface
Chapter 17. Creating a dummy interface As a Red Hat Enterprise Linux user, you can create and use dummy network interfaces for debugging and testing purposes. A dummy interface provides a device to route packets without actually transmitting them. It enables you to create additional loopback-like devices managed by NetworkManager and makes an inactive SLIP (Serial Line Internet Protocol) address look like a real address for local programs. 17.1. Creating a dummy interface with both an IPv4 and IPv6 address by using nmcli You can create a dummy interface with various settings, such as IPv4 and IPv6 addresses. After creating the interface, NetworkManager automatically assigns it to the default public firewalld zone. Procedure Create a dummy interface named dummy0 with static IPv4 and IPv6 addresses: Note To configure a dummy interface without IPv4 and IPv6 addresses, set both the ipv4.method and ipv6.method parameters to disabled . Otherwise, IP auto-configuration fails, and NetworkManager deactivates the connection and removes the device. Verification List the connection profiles: Additional resources nm-settings(5) man page on your system
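As the note above mentions, the same command can create a dummy interface with no IP configuration by disabling both address families. The interface name dummy1 is an example value.
nmcli connection add type dummy ifname dummy1 ipv4.method disabled ipv6.method disabled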
[ "nmcli connection add type dummy ifname dummy0 ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv6.method manual ipv6.addresses 2001:db8:2::1/64", "nmcli connection show NAME UUID TYPE DEVICE dummy-dummy0 aaf6eb56-73e5-4746-9037-eed42caa8a65 dummy dummy0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/creating-a-dummy-interface_configuring-and-managing-networking
Chapter 4. Using quotas and limit ranges
Chapter 4. Using quotas and limit ranges A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. Using quotas and limit ranges, cluster administrators can set constraints to limit the number of objects or amount of compute resources that are used in your project. This helps cluster administrators better manage and allocate resources across all projects, and ensure that no projects are using more than is appropriate for the cluster size. Important Quotas are set by cluster administrators and are scoped to a given project. OpenShift Container Platform project owners can change quotas for their project, but not limit ranges. OpenShift Container Platform users cannot modify quotas or limit ranges. The following sections help you understand how to check on your quota and limit range settings, what sorts of things they can constrain, and how you can request or limit compute resources in your own pods and containers. 4.1. Resources managed by quota A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. The following describes the set of compute resources and object types that may be managed by a quota. Note A pod is in a terminal state if status.phase is Failed or Succeeded . Table 4.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. 
This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Table 4.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. Table 4.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of replication controllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of image streams that can exist in the project. You can configure an object count quota for these standard namespaced resource types using the count/<resource>.<group> syntax. USD oc create quota <name> --hard=count/<resource>.<group>=<quota> 1 1 1 <resource> is the name of the resource, and <group> is the API group, if applicable. Use the kubectl api-resources command for a list of resources and their associated API groups. 4.1.1. Setting resource quota for extended resources Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu . Procedure To determine how many GPUs are available on a node in your cluster, use the following command: USD oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu' Example output openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0 In this example, 2 GPUs are available. Use this command to set a quota in the namespace nvidia . 
In this example, the quota is 1 : USD cat gpu-quota.yaml Example output apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1 Create the quota with the following command: USD oc create -f gpu-quota.yaml Example output resourcequota/gpu-quota created Verify that the namespace has the correct quota set using the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1 Run a pod that asks for a single GPU with the following command: USD oc create pod gpu-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: "compute,utility" - name: NVIDIA_REQUIRE_CUDA value: "cuda>=5.0" command: ["sleep"] args: ["infinity"] resources: limits: nvidia.com/gpu: 1 Verify that the pod is running with the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m Verify that the quota Used counter is correct by running the following command: USD oc describe quota gpu-quota -n nvidia Example output Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1 Using the following command, attempt to create a second GPU pod in the nvidia namespace. This is technically available on the node because it has 2 GPUs: USD oc create -f gpu-pod.yaml Example output Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1 This Forbidden error message occurs because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota. 4.1.2. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description Terminating Match pods where spec.activeDeadlineSeconds >= 0 . NotTerminating Match pods where spec.activeDeadlineSeconds is nil . BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A Terminating , NotTerminating , and NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu ephemeral-storage requests.ephemeral-storage limits.ephemeral-storage Note Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default. Additional resources See Resources managed by quotas for more on compute resources. See Quality of Service Classes for more on committing compute resources. 4.2. Admin quota usage 4.2.1.
Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage stats are in the system. 4.2.2. Requests compared to limits When allocating compute resources by quota, each container can specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. 4.2.3. Sample resource quota definitions Example core-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: "10" 1 persistentvolumeclaims: "4" 2 replicationcontrollers: "20" 3 secrets: "10" 4 services: "10" 5 1 The total number of ConfigMap objects that can exist in the project. 2 The total number of persistent volume claims (PVCs) that can exist in the project. 3 The total number of replication controllers that can exist in the project. 4 The total number of secrets that can exist in the project. 5 The total number of services that can exist in the project. Example openshift-object-counts.yaml apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: "10" 1 1 The total number of image streams that can exist in the project. Example compute-resources.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: "4" 1 requests.cpu: "1" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: "2" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7 1 The total number of pods in a non-terminal state that can exist in the project. 2 Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core. 3 Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi. 4 Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi. 5 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores. 6 Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi. 7 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi. Example besteffort.yaml apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: "1" 1 scopes: - BestEffort 2 1 The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project. 
2 Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU. Example compute-resources-long-running.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: "4" 1 limits.cpu: "4" 2 limits.memory: "2Gi" 3 limits.ephemeral-storage: "4Gi" 4 scopes: - NotTerminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil . Build pods will fall under NotTerminating unless the RestartNever policy is applied. Example compute-resources-time-bound.yaml apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: "2" 1 limits.cpu: "1" 2 limits.memory: "1Gi" 3 limits.ephemeral-storage: "1Gi" 4 scopes: - Terminating 5 1 The total number of pods in a non-terminal state. 2 Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value. 3 Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value. 4 Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value. 5 Restricts the quota to only matching pods where spec.activeDeadlineSeconds >=0 . For example, this quota would charge for build pods, but not long running pods such as a web server or database. Example storage-consumption.yaml apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: "10" 1 requests.storage: "50Gi" 2 gold.storageclass.storage.k8s.io/requests.storage: "10Gi" 3 silver.storageclass.storage.k8s.io/requests.storage: "20Gi" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" 5 bronze.storageclass.storage.k8s.io/requests.storage: "0" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" 7 1 The total number of persistent volume claims in a project 2 Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value. 3 Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value. 4 Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value. 5 Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value. 6 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot request storage. 7 Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0 , it means bronze storage class cannot create claims. 4.2.4. Creating a quota To create a quota, first define the quota in a file. Then use that file to apply it to a project. See the Additional resources section for a link describing this. 
USD oc create -f <resource_quota_definition> [-n <project_name>] Here is an example using the core-object-counts.yaml resource quota definition and the demoproject project name: USD oc create -f core-object-counts.yaml -n demoproject 4.2.5. Creating object count quotas You can create an object count quota for all OpenShift Container Platform standard namespaced resource types, such as BuildConfig , and DeploymentConfig . An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources. To configure an object count quota for a resource, run the following command: USD oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> Example showing object count quota: USD oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota "test" created USD oc describe quota test Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4 This example limits the listed resources to the hard limit in each project in the cluster. 4.2.6. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details: First, get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 4.2.7. Configuring quota synchronization period When a set of resources are deleted, the synchronization time frame of resources is determined by the resource-quota-sync-period setting in the /etc/origin/master/master-config.yaml file. Before quota usage is restored, a user can encounter problems when attempting to reuse the resources. You can change the resource-quota-sync-period setting to have the set of resources regenerate in the needed amount of time (in seconds) for the resources to be once again available: Example resource-quota-sync-period setting kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - "10s" After making any changes, restart the controller services to apply them. USD master-restart api USD master-restart controllers Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used. Note The resource-quota-sync-period setting balances system performance. Reducing the sync period can result in a heavy load on the controller. 4.2.8. Explicit quota to consume a resource If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded. 
For high-cost compute or storage resources, administrators can require an explicit quota be granted to consume a resource. For example, if a project was not explicitly given quota for storage related to the gold storage class, users of that project would not be able to create any storage of that type. In order to require explicit quota to consume a particular resource, the following stanza should be added to the master-config.yaml. admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2 1 The group or resource to whose consumption is limited by default. 2 The name of the resource tracked by quota associated with the group/resource to limit by default. In the above example, the quota system intercepts every operation that creates or updates a PersistentVolumeClaim . It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a PersistentVolumeClaim that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied. Additional resources For examples of how to create the file needed to set quotas, see Resources managed by quotas . A description of how to allocate compute resources managed by quota . For information on managing limits and quota on project resources, see Working with projects . If a quota has been defined for your project, see Understanding deployments for considerations in cluster configurations. 4.3. Setting limit ranges A limit range, defined by a LimitRange object, defines compute resource constraints at the pod, container, image, image stream, and persistent volume claim level. The limit range specifies the amount of resources that a pod, container, image, image stream, or persistent volume claim can consume. All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource. For CPU and memory limits, if you specify a maximum value but do not specify a minimum limit, the resource can consume more CPU and memory resources than the maximum value. Core limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "core-resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 - type: "Container" max: cpu: "2" 6 memory: "1Gi" 7 min: cpu: "100m" 8 memory: "4Mi" 9 default: cpu: "300m" 10 memory: "200Mi" 11 defaultRequest: cpu: "200m" 12 memory: "100Mi" 13 maxLimitRequestRatio: cpu: "10" 14 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request on a node across all containers. 3 The maximum amount of memory that a pod can request on a node across all containers. 4 The minimum amount of CPU that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 5 The minimum amount of memory that a pod can request on a node across all containers. 
If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 6 The maximum amount of CPU that a single container in a pod can request. 7 The maximum amount of memory that a single container in a pod can request. 8 The minimum amount of CPU that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max CPU value. 9 The minimum amount of memory that a single container in a pod can request. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. 10 The default CPU limit for a container if you do not specify a limit in the pod specification. 11 The default memory limit for a container if you do not specify a limit in the pod specification. 12 The default CPU request for a container if you do not specify a request in the pod specification. 13 The default memory request for a container if you do not specify a request in the pod specification. 14 The maximum limit-to-request ratio for a container. OpenShift Container Platform Limit range object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "openshift-resource-limits" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: "Pod" max: cpu: "2" 4 memory: "1Gi" 5 ephemeral-storage: "1Gi" 6 min: cpu: "1" 7 memory: "1Gi" 8 1 The maximum size of an image that can be pushed to an internal registry. 2 The maximum number of unique image tags as defined in the specification for the image stream. 3 The maximum number of unique image references as defined in the specification for the image stream status. 4 The maximum amount of CPU that a pod can request on a node across all containers. 5 The maximum amount of memory that a pod can request on a node across all containers. 6 The maximum amount of ephemeral storage that a pod can request on a node across all containers. 7 The minimum amount of CPU that a pod can request on a node across all containers. See the Supported Constraints table for important information. 8 The minimum amount of memory that a pod can request on a node across all containers. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more than the max memory value. You can specify both core and OpenShift Container Platform resources in one limit range object. 4.3.1. Container limits Supported Resources: CPU Memory Supported Constraints Per container, the following must hold true if specified: Container Constraint Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container.resources.limits[<resource>] (optional) If the configuration defines a min CPU, the request value must be greater than the CPU value. If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] If the configuration defines a max CPU, you do not need to define a CPU request value. However, you must set a limit that satisfies the maximum CPU constraint that is specified in the limit range.
MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ) If the limit range defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. Additionally, OpenShift Container Platform calculates a limit-to-request ratio by dividing the limit by the request . The result should be an integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . Supported Defaults: Default[<resource>] Defaults container.resources.limit[<resource>] to specified value if none. Default Requests[<resource>] Defaults container.resources.requests[<resource>] to specified value if none. 4.3.2. Pod limits Supported Resources: CPU Memory Supported Constraints: Across all containers in a pod, the following must hold true: Table 4.4. Pod Constraint Enforced Behavior Min Min[<resource>] less than or equal to container.resources.requests[<resource>] (required) less than or equal to container.resources.limits[<resource>] . If you do not set a min value or you set min to 0 , the result is no limit and the pod can consume more of the resource than the max value. Max container.resources.limits[<resource>] (required) less than or equal to Max[<resource>] . MaxLimitRequestRatio MaxLimitRequestRatio[<resource>] less than or equal to ( container.resources.limits[<resource>] / container.resources.requests[<resource>] ). 4.3.3. Image limits Supported Resources: Storage Resource type name: openshift.io/Image Per image, the following must hold true if specified: Table 4.5. Image Constraint Behavior Max image.dockerimagemetadata.size less than or equal to Max[<resource>] Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quota. The REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA environment variable must be set to true . By default, the environment variable is set to true for new deployments. 4.3.4. Image stream limits Supported Resources: openshift.io/image-tags openshift.io/images Resource type name: openshift.io/ImageStream Per image stream, the following must hold true if specified: Table 4.6. ImageStream Constraint Behavior Max[openshift.io/image-tags] length( uniqueimagetags( imagestream.spec.tags ) ) less than or equal to Max[openshift.io/image-tags] uniqueimagetags returns unique references to images of given spec tags. Max[openshift.io/images] length( uniqueimages( imagestream.status.tags ) ) less than or equal to Max[openshift.io/images] uniqueimages returns unique image names found in status tags. The name is equal to the digest for the image. 4.3.5. Counting of image references The openshift.io/image-tags resource represents unique stream limits. Possible references are an ImageStreamTag , an ImageStreamImage , or a DockerImage . Tags can be created by using the oc tag and oc import-image commands or by using image streams. No distinction is made between internal and external references. However, each unique reference that is tagged in an image stream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names that are recorded in image stream status. 
It helps to restrict several images that can be pushed to the internal registry. Internal and external references are not distinguished. 4.3.6. PersistentVolumeClaim limits Supported Resources: Storage Supported Constraints: Across all persistent volume claims in a project, the following must hold true: Table 4.7. Pod Constraint Enforced Behavior Min Min[<resource>] <= claim.spec.resources.requests[<resource>] (required) Max claim.spec.resources.requests[<resource>] (required) <= Max[<resource>] Limit Range Object Definition { "apiVersion": "v1", "kind": "LimitRange", "metadata": { "name": "pvcs" 1 }, "spec": { "limits": [{ "type": "PersistentVolumeClaim", "min": { "storage": "2Gi" 2 }, "max": { "storage": "50Gi" 3 } } ] } } 1 The name of the limit range object. 2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. Additional resources For information on stream limits, see managing image streams . For more information, see compute resource constraints . For more information on how CPU and memory are measured, see Recommended control plane practices . You can specify limits and requests for ephemeral storage. For more information on this feature, see Understanding ephemeral storage . 4.4. Limit range operations 4.4.1. Creating a limit range Shown here is an example procedure to follow for creating a limit range. Procedure Create the object: USD oc create -f <limit_range_file> -n <project> 4.4.2. View the limit You can view any limit ranges that are defined in a project by navigating in the web console to the Quota page for the project. You can also use the CLI to view limit range details by performing the following steps: Procedure Get the list of limit range objects that are defined in the project. For example, a project called demoproject : USD oc get limits -n demoproject Example Output NAME AGE resource-limits 6d Describe the limit range. For example, for a limit range called resource-limits : USD oc describe limits resource-limits -n demoproject Example Output Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - 4.4.3. Deleting a limit range To remove a limit range, run the following command: USD oc delete limits <limit_name> Additional resources For information about enforcing different limits on the number of projects that your users can create, managing limits, and quota on project resources, see Resource quotas per projects .
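Tying the operations above together, the following sketch creates a small limit range in the demoproject project, inspects it, and removes it. The values mirror the container defaults shown earlier and are examples only.
cat << 'EOF' > core-resource-limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits
spec:
  limits:
  - type: Container
    default:
      cpu: 300m
      memory: 200Mi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
EOF
oc create -f core-resource-limits.yaml -n demoproject
oc describe limits core-resource-limits -n demoproject
oc delete limits core-resource-limits -n demoproject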
[ "oc create quota <name> --hard=count/<resource>.<group>=<quota> 1", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu: 0 0", "cat gpu-quota.yaml", "apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "oc create pod gpu-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod-s46h7 namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 requests.ephemeral-storage: 2Gi 4 limits.cpu: \"2\" 5 limits.memory: 2Gi 6 limits.ephemeral-storage: 4Gi 7", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 limits.ephemeral-storage: \"4Gi\" 4 scopes: - NotTerminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 limits.ephemeral-storage: \"1Gi\" 4 scopes: - Terminating 5", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f <resource_quota_definition> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4 resourcequota \"test\" created oc describe quota test 
Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc get quota -n demoproject NAME AGE besteffort 11m compute-resources 2m core-object-counts 29m", "oc describe quota core-object-counts -n demoproject Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "kubernetesMasterConfig: apiLevels: - v1beta3 - v1 apiServerArguments: null controllerArguments: resource-quota-sync-period: - \"10s\"", "master-restart api master-restart controllers", "admissionConfig: pluginConfig: ResourceQuota: configuration: apiVersion: resourcequota.admission.k8s.io/v1alpha1 kind: Configuration limitedResources: - resource: persistentvolumeclaims 1 matchContains: - gold.storageclass.storage.k8s.io/requests.storage 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"core-resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 - type: \"Container\" max: cpu: \"2\" 6 memory: \"1Gi\" 7 min: cpu: \"100m\" 8 memory: \"4Mi\" 9 default: cpu: \"300m\" 10 memory: \"200Mi\" 11 defaultRequest: cpu: \"200m\" 12 memory: \"100Mi\" 13 maxLimitRequestRatio: cpu: \"10\" 14", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"openshift-resource-limits\" spec: limits: - type: openshift.io/Image max: storage: 1Gi 1 - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 - type: \"Pod\" max: cpu: \"2\" 4 memory: \"1Gi\" 5 ephemeral-storage: \"1Gi\" 6 min: cpu: \"1\" 7 memory: \"1Gi\" 8", "{ \"apiVersion\": \"v1\", \"kind\": \"LimitRange\", \"metadata\": { \"name\": \"pvcs\" 1 }, \"spec\": { \"limits\": [{ \"type\": \"PersistentVolumeClaim\", \"min\": { \"storage\": \"2Gi\" 2 }, \"max\": { \"storage\": \"50Gi\" 3 } } ] } }", "oc create -f <limit_range_file> -n <project>", "oc get limits -n demoproject", "NAME AGE resource-limits 6d", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - -", "oc delete limits <limit_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/compute-resource-quotas
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in two versions, Red Hat build of OpenJDK 8u and Red Hat build of OpenJDK 11u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Container Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.10/pr01
Deploying OpenShift Data Foundation in external mode
Deploying OpenShift Data Foundation in external mode Red Hat OpenShift Data Foundation 4.17 Instructions for deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. Red Hat Storage Documentation Team Abstract Read this document for instructions on installing Red Hat OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster or IBM FlashSystem.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_in_external_mode/index
Chapter 47. Paho
Chapter 47. Paho Both producer and consumer are supported. The Paho component provides a connector for the MQTT messaging protocol using the Eclipse Paho library . Paho is one of the most popular MQTT libraries, so if you would like to integrate it with your Java project, the Camel Paho connector is the way to go. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-paho</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 47.1. URI format paho:topic[?options] Where topic is the name of the topic. 47.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 47.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 47.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 47.3. Component Options The Paho component supports 31 options, which are listed below. Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanSession (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable.
If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true boolean clientId (common) MQTT client identifier. The identifier must be unique. String configuration (common) To use the shared Paho configuration. PahoConfiguration connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 int maxInflight (common) Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 128000 int mqttVersion (common) Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoPersistence qos (common) Client quality of service level (0-2). 2 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. 
An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean client (advanced) To use a shared Paho client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. Properties executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. 
Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 47.4. Endpoint Options The Paho endpoint is configured using URI syntax: with the following path and query parameters: 47.4.1. Path Parameters (1 parameters) Name Description Default Type topic (common) Required Name of the topic. String 47.4.2. Query Parameters (31 parameters) Name Description Default Type automaticReconnect (common) Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true boolean brokerUrl (common) The URL of the MQTT broker. tcp://localhost:1883 String cleanSession (common) Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true boolean clientId (common) MQTT client identifier. The identifier must be unique. String connectionTimeout (common) Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 int filePersistenceDirectory (common) Base directory used by file persistence. Will by default use user directory. String keepAliveInterval (common) Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 int maxInflight (common) Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 int maxReconnectDelay (common) Get the maximum time (in millis) to wait between reconnects. 
128000 int mqttVersion (common) Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. int persistence (common) Client persistence to be used - memory or file. Enum values: FILE MEMORY MEMORY PahoPersistence qos (common) Client quality of service level (0-2). 2 int retained (common) Retain option. false boolean serverURIs (common) Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String willPayload (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String willQos (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. int willRetained (common) Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false boolean willTopic (common) Sets the Last Will and Testament (LWT) for the connection. 
In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean client (advanced) To use an existing mqtt client. MqttClient customWebSocketHeaders (advanced) Sets the Custom WebSocket Headers for the WebSocket Connection. Properties executorServiceTimeout (advanced) Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 int httpsHostnameVerificationEnabled (security) Whether SSL HostnameVerifier is enabled or not. The default value is true. true boolean password (security) Password to be used for authentication against the MQTT broker. String socketFactory (security) Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. SocketFactory sslClientProps (security) Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. 
The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. Properties sslHostnameVerifier (security) Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. HostnameVerifier userName (security) Username to be used for authentication against the MQTT broker. String 47.5. Headers The following headers are recognized by the Paho component: Header Java constant Endpoint type Value type Description CamelMqttTopic PahoConstants.MQTT_TOPIC Consumer String The name of the topic CamelMqttQoS PahoConstants.MQTT_QOS Consumer Integer QualityOfService of the incoming message CamelPahoOverrideTopic PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC Producer String Name of topic to override and send to instead of topic specified on endpoint 47.6. Default payload type By default Camel Paho component operates on the binary payloads extracted out of (or put into) the MQTT message: // Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody("paho:topic"); // Send payload byte[] payload = "message".getBytes(); producerTemplate.sendBody("paho:topic", payload); But of course Camel build-in type conversion API can perform the automatic data type transformations for you. In the example below Camel automatically converts binary payload into String (and conversely): // Receive payload String payload = consumerTemplate.receiveBody("paho:topic", String.class); // Send payload String payload = "message"; producerTemplate.sendBody("paho:topic", payload); 47.7. 
Samples For example the following snippet reads messages from the MQTT broker installed on the same host as the Camel router: from("paho:some/queue") .to("mock:test"); While the snippet below sends message to the MQTT broker: from("direct:test") .to("paho:some/target/queue"); For example this is how to read messages from the remote MQTT broker: from("paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883") .to("mock:test"); And here we override the default topic and set to a dynamic topic from("direct:test") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple("USD{header.customerId}")) .to("paho:some/target/queue"); 47.8. Spring Boot Auto-Configuration When using paho with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-starter</artifactId> </dependency> The component supports 32 options, which are listed below. Name Description Default Type camel.component.paho.automatic-reconnect Sets whether the client will automatically attempt to reconnect to the server if the connection is lost. If set to false, the client will not attempt to automatically reconnect to the server in the event that the connection is lost. If set to true, in the event that the connection is lost, the client will attempt to reconnect to the server. It will initially wait 1 second before it attempts to reconnect, for every failed reconnect attempt, the delay will double until it is at 2 minutes at which point the delay will stay at 2 minutes. true Boolean camel.component.paho.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.paho.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.paho.broker-url The URL of the MQTT broker. tcp://localhost:1883 String camel.component.paho.clean-session Sets whether the client and server should remember state across restarts and reconnects. If set to false both the client and server will maintain state across restarts of the client, the server and the connection. As state is maintained: Message delivery will be reliable meeting the specified QOS even if the client, server or connection are restarted. The server will treat a subscription as durable. If set to true the client and server will not maintain state across restarts of the client, the server or the connection. This means Message delivery to the specified QOS cannot be maintained if the client, server or connection are restarted The server will treat a subscription as non-durable. true Boolean camel.component.paho.client To use a shared Paho client. The option is a org.eclipse.paho.client.mqttv3.MqttClient type. MqttClient camel.component.paho.client-id MQTT client identifier. The identifier must be unique. 
String camel.component.paho.configuration To use the shared Paho configuration. The option is a org.apache.camel.component.paho.PahoConfiguration type. PahoConfiguration camel.component.paho.connection-timeout Sets the connection timeout value. This value, measured in seconds, defines the maximum time interval the client will wait for the network connection to the MQTT server to be established. The default timeout is 30 seconds. A value of 0 disables timeout processing meaning the client will wait until the network connection is made successfully or fails. 30 Integer camel.component.paho.custom-web-socket-headers Sets the Custom WebSocket Headers for the WebSocket Connection. The option is a java.util.Properties type. Properties camel.component.paho.enabled Whether to enable auto configuration of the paho component. This is enabled by default. Boolean camel.component.paho.executor-service-timeout Set the time in seconds that the executor service should wait when terminating before forcefully terminating. It is not recommended to change this value unless you are absolutely sure that you need to. 1 Integer camel.component.paho.file-persistence-directory Base directory used by file persistence. Will by default use user directory. String camel.component.paho.https-hostname-verification-enabled Whether SSL HostnameVerifier is enabled or not. The default value is true. true Boolean camel.component.paho.keep-alive-interval Sets the keep alive interval. This value, measured in seconds, defines the maximum time interval between messages sent or received. It enables the client to detect if the server is no longer available, without having to wait for the TCP/IP timeout. The client will ensure that at least one message travels across the network within each keep alive period. In the absence of a data-related message during the time period, the client sends a very small ping message, which the server will acknowledge. A value of 0 disables keepalive processing in the client. The default value is 60 seconds. 60 Integer camel.component.paho.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.paho.max-inflight Sets the max inflight. please increase this value in a high traffic environment. The default value is 10. 10 Integer camel.component.paho.max-reconnect-delay Get the maximum time (in millis) to wait between reconnects. 128000 Integer camel.component.paho.mqtt-version Sets the MQTT version. The default action is to connect with version 3.1.1, and to fall back to 3.1 if that fails. Version 3.1.1 or 3.1 can be selected specifically, with no fall back, by using the MQTT_VERSION_3_1_1 or MQTT_VERSION_3_1 options respectively. Integer camel.component.paho.password Password to be used for authentication against the MQTT broker. String camel.component.paho.persistence Client persistence to be used - memory or file. PahoPersistence camel.component.paho.qos Client quality of service level (0-2). 2 Integer camel.component.paho.retained Retain option. 
false Boolean camel.component.paho.server-u-r-is Set a list of one or more serverURIs the client may connect to. Multiple servers can be separated by comma. Each serverURI specifies the address of a server that the client may connect to. Two types of connection are supported tcp:// for a TCP connection and ssl:// for a TCP connection secured by SSL/TLS. For example: tcp://localhost:1883 ssl://localhost:8883 If the port is not specified, it will default to 1883 for tcp:// URIs, and 8883 for ssl:// URIs. If serverURIs is set then it overrides the serverURI parameter passed in on the constructor of the MQTT client. When an attempt to connect is initiated the client will start with the first serverURI in the list and work through the list until a connection is established with a server. If a connection cannot be made to any of the servers then the connect attempt fails. Specifying a list of servers that a client may connect to has several uses: High Availability and reliable message delivery Some MQTT servers support a high availability feature where two or more equal MQTT servers share state. An MQTT client can connect to any of the equal servers and be assured that messages are reliably delivered and durable subscriptions are maintained no matter which server the client connects to. The cleansession flag must be set to false if durable subscriptions and/or reliable message delivery is required. Hunt List A set of servers may be specified that are not equal (as in the high availability option). As no state is shared across the servers reliable message delivery and durable subscriptions are not valid. The cleansession flag must be set to true if the hunt list mode is used. String camel.component.paho.socket-factory Sets the SocketFactory to use. This allows an application to apply its own policies around the creation of network sockets. If using an SSL connection, an SSLSocketFactory can be used to supply application-specific security settings. The option is a javax.net.SocketFactory type. SocketFactory camel.component.paho.ssl-client-props Sets the SSL properties for the connection. Note that these properties are only valid if an implementation of the Java Secure Socket Extensions (JSSE) is available. These properties are not used if a custom SocketFactory has been set. The following properties can be used: com.ibm.ssl.protocol One of: SSL, SSLv3, TLS, TLSv1, SSL_TLS. com.ibm.ssl.contextProvider Underlying JSSE provider. For example IBMJSSE2 or SunJSSE com.ibm.ssl.keyStore The name of the file that contains the KeyStore object that you want the KeyManager to use. For example /mydir/etc/key.p12 com.ibm.ssl.keyStorePassword The password for the KeyStore object that you want the KeyManager to use. The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.keyStoreType Type of key store, for example PKCS12, JKS, or JCEKS. com.ibm.ssl.keyStoreProvider Key store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.trustStore The name of the file that contains the KeyStore object that you want the TrustManager to use. com.ibm.ssl.trustStorePassword The password for the TrustStore object that you want the TrustManager to use. 
The password can either be in plain-text, or may be obfuscated using the static method: com.ibm.micro.security.Password.obfuscate(char password). This obfuscates the password using a simple and insecure XOR and Base64 encoding mechanism. Note that this is only a simple scrambler to obfuscate clear-text passwords. com.ibm.ssl.trustStoreType The type of KeyStore object that you want the default TrustManager to use. Same possible values as keyStoreType. com.ibm.ssl.trustStoreProvider Trust store provider, for example IBMJCE or IBMJCEFIPS. com.ibm.ssl.enabledCipherSuites A list of which ciphers are enabled. Values are dependent on the provider, for example: SSL_RSA_WITH_AES_128_CBC_SHA;SSL_RSA_WITH_3DES_EDE_CBC_SHA. com.ibm.ssl.keyManager Sets the algorithm that will be used to instantiate a KeyManagerFactory object instead of using the default algorithm available in the platform. Example values: IbmX509 or IBMJ9X509. com.ibm.ssl.trustManager Sets the algorithm that will be used to instantiate a TrustManagerFactory object instead of using the default algorithm available in the platform. Example values: PKIX or IBMJ9X509. The option is a java.util.Properties type. Properties camel.component.paho.ssl-hostname-verifier Sets the HostnameVerifier for the SSL connection. Note that it will be used after handshake on a connection and you should do actions by yourself when hostname is verified error. There is no default HostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. HostnameVerifier camel.component.paho.user-name Username to be used for authentication against the MQTT broker. String camel.component.paho.will-payload Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String camel.component.paho.will-qos Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. Integer camel.component.paho.will-retained Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. false Boolean camel.component.paho.will-topic Sets the Last Will and Testament (LWT) for the connection. In the event that this client unexpectedly loses its connection to the server, the server will publish a message to itself using the supplied details. The topic to publish to The byte payload for the message. The quality of service to publish the message at (0, 1 or 2). Whether or not the message should be retained. String
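As a concrete illustration of the auto-configuration options listed above, the following application.yaml fragment is a minimal sketch that configures the shared Paho component; the broker address, client identifier, and credentials are placeholder values rather than values taken from this documentation:
camel:
  component:
    paho:
      broker-url: tcp://broker.example.com:1883
      client-id: my-camel-client
      clean-session: true
      automatic-reconnect: true
      qos: 1
      user-name: mqtt-user
      password: mqtt-password
With the component configured this way, routes can use plain paho:topic endpoints without repeating the broker URL or credentials in every endpoint URI. The same settings can also be expressed in application.properties form, for example camel.component.paho.broker-url=tcp://broker.example.com:1883.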
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-paho</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "paho:topic[?options]", "paho:topic", "// Receive payload byte[] payload = (byte[]) consumerTemplate.receiveBody(\"paho:topic\"); // Send payload byte[] payload = \"message\".getBytes(); producerTemplate.sendBody(\"paho:topic\", payload);", "// Receive payload String payload = consumerTemplate.receiveBody(\"paho:topic\", String.class); // Send payload String payload = \"message\"; producerTemplate.sendBody(\"paho:topic\", payload);", "from(\"paho:some/queue\") .to(\"mock:test\");", "from(\"direct:test\") .to(\"paho:some/target/queue\");", "from(\"paho:some/queue?brokerUrl=tcp://iot.eclipse.org:1883\") .to(\"mock:test\");", "from(\"direct:test\") .setHeader(PahoConstants.CAMEL_PAHO_OVERRIDE_TOPIC, simple(\"USD{header.customerId}\")) .to(\"paho:some/target/queue\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-paho-component-starter
Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure
Chapter 12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.16, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 12.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 12.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 12.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 12.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 12.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 12.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 12.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. 
This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 12.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 12.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 12.4.5. 
Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 12.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 12.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 12.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 12.1. 
Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.globalAddresses.create compute.globalAddresses.get compute.globalAddresses.use compute.globalForwardingRules.create compute.globalForwardingRules.get compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.networks.use compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 12.2. Required permissions for creating load balancer resources compute.backendServices.create compute.backendServices.get compute.backendServices.list compute.backendServices.update compute.backendServices.use compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use compute.targetTcpProxies.create compute.targetTcpProxies.get compute.targetTcpProxies.use Example 12.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 12.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 12.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 12.7. 
Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly compute.regionHealthChecks.create compute.regionHealthChecks.get compute.regionHealthChecks.useReadOnly Example 12.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 12.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 12.10. Required IAM permissions for installation iam.roles.get Example 12.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 12.12. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 12.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 12.14. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.addresses.setLabels compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.globalAddresses.delete compute.globalAddresses.list compute.globalForwardingRules.delete compute.globalForwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 12.15. Required permissions for deleting load balancer resources compute.backendServices.delete compute.backendServices.list compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list compute.targetTcpProxies.delete compute.targetTcpProxies.list Example 12.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 12.17. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 12.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 12.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 12.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list compute.regionHealthChecks.delete compute.regionHealthChecks.list Example 12.21. Required Images permissions for deletion compute.images.delete compute.images.list Example 12.22. Required permissions to get Region related information compute.regions.get Example 12.23. 
Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 12.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 12.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 12.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. 
The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 12.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 12.24. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D Tau T2D 12.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 12.6. 
Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 12.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 12.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 12.6.3. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. 
Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 12.6.4. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 12.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 12.7. Exporting common variables 12.7.1. 
Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 12.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 
4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 12.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 12.25. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 12.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 12.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 12.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 12.7. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 12.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 
4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 12.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 12.26. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 12.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 12.27. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' 
+ context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 12.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 12.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 12.28. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 12.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 12.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 12.29.
03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 12.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
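The 03_iam.py template shown later in this section creates only two service accounts, with account IDs derived from the infrastructure ID: <infra_id>-m for control plane nodes and <infra_id>-w for compute nodes. Assuming the standard GCP service account email format, their addresses take the form <account_id>@<project>.iam.gserviceaccount.com, which is why the export commands below can discover them with an email filter that begins with the infrastructure ID. The following Python sketch only illustrates that naming; the values are placeholders, not output from a real deployment:
# Illustration only: how the service accounts created by 03_iam.py are named.
# Replace the placeholders with the INFRA_ID and PROJECT_NAME values exported earlier.
infra_id = "<infra_id>"
project_name = "<project_name>"

master_sa = f"{infra_id}-m@{project_name}.iam.gserviceaccount.com"
worker_sa = f"{infra_id}-w@{project_name}.iam.gserviceaccount.com"

print("control plane service account:", master_sa)
print("compute service account:", worker_sa)
These are the addresses that are later exported as MASTER_SERVICE_ACCOUNT and WORKER_SERVICE_ACCOUNT.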
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 12.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 12.30. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 12.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 12.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
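The bootstrap Ignition config is too large to embed directly in instance metadata, so the 04_bootstrap.py template shown later in this section embeds only a small pointer config whose replace source is this signed URL; the bootstrap machine fetches the full bootstrap.ign from the bucket at first boot, and the URL stays valid for the one hour requested with -d 1h. The following Python sketch shows that pointer document; the URL is a placeholder for the signed URL that you export in the next step:
import json

# Sketch of the pointer Ignition config that 04_bootstrap.py places in the
# bootstrap instance's 'user-data' metadata. The placeholder URL stands in
# for the signed URL exported as BOOTSTRAP_IGN below.
signed_url = "https://storage.googleapis.com/<infra_id>-bootstrap-ignition/bootstrap.ign?<signature>"

pointer_config = {
    "ignition": {
        "config": {"replace": {"source": signed_url}},
        "version": "3.2.0",
    }
}

print(json.dumps(pointer_config))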
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 12.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 12.31. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 12.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
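Each control plane instance must be added to the per-zone unmanaged instance group, named <infra_id>-master-<zone>-ig by the 02_lb_int.py template, so that the internal API load balancer can reach it; for an external cluster, the instances must also join the <infra_id>-api-target-pool. The following Python sketch only illustrates that instance-to-group mapping with placeholder values; the exact gcloud commands to run are listed next:
# Illustration only: the mapping between control plane instances and the
# per-zone instance groups created by 02_lb_int.py. Replace the placeholders
# with the INFRA_ID and ZONE_0, ZONE_1, and ZONE_2 values exported earlier.
infra_id = "<infra_id>"
zones = ["<zone_0>", "<zone_1>", "<zone_2>"]

for index, zone in enumerate(zones):
    instance = f"{infra_id}-master-{index}"
    instance_group = f"{infra_id}-master-{zone}-ig"
    print(f"add {instance} to {instance_group} in zone {zone}")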
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 12.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 12.32. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
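# As with the master-0 and master-1 definitions above, this third instance
# boots from a pd-ssd disk created from the RHCOS image and receives the full
# master.ign contents through the 'user-data' metadata key; only the bootstrap
# machine is given a signed-URL pointer instead.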
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 12.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 12.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 12.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 12.33. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 12.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.20. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
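The requirement noted above to automatically approve kubelet serving certificate requests on user-provisioned infrastructure can be met in many ways; the following Python sketch is only an illustration, not a supported implementation. It assumes that the oc client is installed and that KUBECONFIG is exported, and it approves any pending CSR whose requestor is the node bootstrapper service account or a node client, which simplifies the identity checks described above:
import json
import subprocess
import time

# Illustrative only: a simple approval loop for pending CSRs. A production
# approver should also confirm the identity of the requesting node, as
# described above. Assumes `oc` is on the PATH and KUBECONFIG is exported.
ALLOWED_PREFIXES = (
    "system:serviceaccount:openshift-machine-config-operator:node-bootstrapper",
    "system:node:",
)

def pending_csrs():
    out = subprocess.run(["oc", "get", "csr", "-o", "json"],
                         check=True, capture_output=True, text=True).stdout
    for item in json.loads(out).get("items", []):
        if not item.get("status"):  # an empty status means the CSR is still pending, matching the go-template filter above
            yield item["metadata"]["name"], item["spec"].get("username", "")

while True:
    for name, requestor in pending_csrs():
        if requestor.startswith(ALLOWED_PREFIXES):
            subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)
    time.sleep(30)
Once the serving certificates are issued, the machines reach the Ready status.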
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 12.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. 
Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver 
apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 12.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.25. steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
[ "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir 
<installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute 
target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets 
describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 
router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m 
openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-restricted-networks-gcp
Chapter 3. Legacy Support
Chapter 3. Legacy Support 3.1. Legacy /lifecycle endpoint support Abstract Support for the legacy API endpoint /lifecycle. JSON XML 3.2. Parameters Name Description Example Default products Index of products filter. Multiple products separated by a comma (,) are supported. products=Red%20Hat%20Enterprise%20Linux,Openshift%20Container%20Platform all all_versions Index of all versions, including ones that are out of Red Hat service. all_versions=false true
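For illustration only, the two parameters described above can be combined in a single request. The host name in the following sketch is a placeholder, and the endpoint path is taken from the commands listed below; adjust both for your environment.

# Query the legacy lifecycle endpoint for two products, excluding versions
# that are out of Red Hat service (all_versions=false). <api_host> is a placeholder.
curl -s "https://<api_host>/plccapi/lifecycle.json?products=Red%20Hat%20Enterprise%20Linux,Openshift%20Container%20Platform&all_versions=false" | jq .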
[ "GET /plccapi/lifecycle.json", "GET /plccapi/lifecycle.xml" ]
https://docs.redhat.com/en/documentation/red_hat_product_life_cycle_data_api/1.0/html/red_hat_product_life_cycle_data_api/legacy_support
Chapter 9. Rulebook activations troubleshooting
Chapter 9. Rulebook activations troubleshooting Occasionally, rulebook activations might fail for a variety of reasons that can be resolved. This section contains a list of possible issues and how you can resolve them. 9.1. Activation stuck in Pending state Perform the following steps if your rulebook activation is stuck in Pending state. Procedure Confirm whether there are other running activations and whether you have reached the limits (for example, memory or CPU limits). If there are other activations running, terminate one or more of them, if possible. If not, check that the default worker, Redis, and activation worker are all running. If all systems are working as expected, check your eda-server internal logs in the worker, scheduler, API, and nginx containers and services to see if the problem can be determined. Note These logs reveal the source of the issue, such as an exception thrown by the code, a runtime error with network issues, or an error with the rulebook code. If your internal logs do not provide information that leads to resolution, report the issue to Red Hat support. If you need to make adjustments, see Modifying the number of simultaneous rulebook activations . Note To adjust the maximum number of simultaneous activations for Ansible Automation Platform Operator on OpenShift Container Platform deployments, see Modifying the number of simultaneous rulebook activations during or after Event-Driven Ansible controller installation in Installing on OpenShift Container Platform . 9.2. Activation keeps restarting Perform the following steps if your rulebook activation keeps restarting. Procedure Log in to Ansible Automation Platform. From the navigation panel, select Automation Decisions Rulebook Activations . From the Rulebook Activations page, select the activation in your list that keeps restarting. The Details page is displayed. Click the History tab for more information and select the rulebook activation that keeps restarting. The Details tab is displayed and shows the output information. Check the Restart policy field for your activation. There are three selections available: On failure (restarts a rulebook activation when the container process fails), Always (always restarts regardless of success or failure with no more than 5 restarts), or Never (never restarts when the container process ends). Confirm whether your rulebook activation Restart policy is set to On failure . If it is, the repeated restarts indicate that an issue is causing the activation to fail. To diagnose the problem, check the YAML code and the instance logs of the rulebook activation for errors. If you cannot find a solution with the restart policy values, proceed to the steps related to the Log level . Check your log level for your activation. If your default log level is Error , go back to the Rulebook Activation page and recreate your activation by following the procedures in Setting up a rulebook activation . Change the Log level to Debug . Run the activation again and navigate to the History tab from the activation details page. On the History page, click one of your recent activations and view the Output . 9.3. Event streams not sending events to activation If you are using event streams to send events to your rulebook activations, occasionally those events might not be successfully routed to your rulebook activation. Procedure Try the following options to resolve this. Ensure that each of your event streams in Event-Driven Ansible controller is not in Test mode . When an event stream is in Test mode, activations do not receive its events.
Verify that the origin service is sending the request properly. Check that the network connection to your platform gateway instance is stable. If you have set up event streams, this is the entry point for the event stream request from the sender. Verify that the proxy in the platform gateway is running. Confirm that the event stream worker is up and running, and able to process the request. Verify that your credential is correctly set up in the event stream. Confirm that the request complies with the authentication mechanism determined by the configured credential (for example, basic authentication must include a header with the credentials, and HMAC must include a header with the signature of the content). Note The credentials might have been changed in Event-Driven Ansible controller, but not updated in the origin service. Verify that the rulebook that is running in the activation reacts to these events. This means that you have defined the event source and added actions that consume the incoming events. Otherwise, the event reaches the activation, but nothing acts on it. If you are using self-signed certificates, you might want to disable certificate validation when sending webhooks from vendors. Most vendors have an option to disable certificate validation for testing or non-production environments. 9.4. Cannot connect to the 2.5 automation controller when running activations You might experience a failed connection to automation controller when you run your activations. Procedure To help resolve the issue, confirm that you have set up a Red Hat Ansible Automation Platform credential and have obtained the correct automation controller URL. If you have not set up a Red Hat Ansible Automation Platform credential, follow the procedures in Setting up a Red Hat Ansible Automation Platform credential . Ensure that this credential has the host set to the following URL format: https://<your_gateway>/api/controller When you have completed this process, try setting up your rulebook activation again.
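As a quick way to exercise this path end to end, you can post a test event to the event stream manually. The following sketch is illustrative only: the event stream URL, the shared secret, and the signature header name (X-Hmac-Signature) are placeholders and must match the HMAC settings of your event stream credential.

# Post a test event to an event stream that uses HMAC authentication.
# URL, SECRET, and the signature header name are placeholders; use the values
# configured in your Event-Driven Ansible event stream credential.
URL="https://<your_gateway>/<event_stream_path>"
SECRET="<hmac_secret>"
BODY='{"message": "test event"}'
# Compute the HMAC-SHA256 signature of the request body with the shared secret.
SIGNATURE=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
# Send the event; -k skips certificate validation, for self-signed test environments only.
curl -k -X POST -H "Content-Type: application/json" -H "X-Hmac-Signature: $SIGNATURE" -d "$BODY" "$URL"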
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-rulebook-troubleshooting
Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer
Chapter 1. Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer From OpenShift Container Platform 4.14 and later versions, you can use the Assisted Installer to install a cluster on Oracle(R) Cloud Infrastructure (OCI) by using infrastructure that you provide. 1.1. The Assisted Installer and OCI overview You can run cluster workloads on Oracle(R) Cloud Infrastructure (OCI) infrastructure that supports dedicated, hybrid, public, and multiple cloud environments. Both Red Hat and Oracle test, validate, and support running OCI in an OpenShift Container Platform cluster on OCI. The Assisted Installer supports the OCI platform, and you can use the Assisted Installer to access an intuitive interactive workflow for the purposes of automating cluster installation tasks on OCI. Figure 1.1. Workflow for using the Assisted Installer in a connected environment to install a cluster on OCI OCI provides services that can meet your needs for regulatory compliance, performance, and cost-effectiveness. You can access OCI Resource Manager configurations to provision and configure OCI resources. Important The steps for provisioning OCI resources are provided as an example only. You can also choose to create the required resources through other methods; the scripts are just an example. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the installation process on OpenShift Container Platform. You can access OCI Resource Manager configurations to complete these steps, or use the configurations to model your own custom script. Follow the steps in the Installing a cluster on Oracle Cloud Infrastructure (OCI) by using the Assisted Installer document to understand how to use the Assisted Installer to install a OpenShift Container Platform cluster on OCI. This document demonstrates the use of the OCI Cloud Controller Manager (CCM) and Oracle's Container Storage Interface (CSI) objects to link your OpenShift Container Platform cluster with the OCI API. Important To ensure the best performance conditions for your cluster workloads that operate on OCI, ensure that volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides guidance for selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Basic environment: 500 GB, and 60 VPUs. Heavy production environment: More than 500 GB, and 100 or more VPUs. Consider reserving additional VPUs to provide sufficient capacity for updates and scaling activities. For more information about VPUs, see Volume Performance Units (Oracle documentation). If you are unfamiliar with the OpenShift Container Platform Assisted Installer, see "Assisted Installer for OpenShift Container Platform". Additional resources Assisted Installer for OpenShift Container Platform Internet access for OpenShift Container Platform Volume Performance Units (Oracle documentation) Instance Sizing Recommendations for OpenShift Container Platform on OCI Nodes (Oracle) documentation 1.2. Creating OCI resources and services Create Oracle(R) Cloud Infrastructure (OCI) resources and services so that you can establish infrastructure with governance standards that meets your organization's requirements. Prerequisites You configured an OCI account to host the cluster. See Prerequisites (Oracle documentation) . Procedure Log in to your Oracle Cloud Infrastructure (OCI) account with administrator privileges. 
Download an archive file from an Oracle resource. The archive file includes files for creating cluster resources and custom manifests. The archive file also includes a script, and when you run the script, the script creates OCI resources, such as DNS records, an instance, and so on. For more information, see Configuration Files (Oracle documentation) . 1.3. Using the Assisted Installer to generate an OCI-compatible discovery ISO image Generate a discovery ISO image and upload the image to Oracle(R) Cloud Infrastructure (OCI), so that the agent can perform hardware and network validation checks before you install an OpenShift Container Platform cluster on OCI. From the OCI web console, you must create the following resources: A compartment for better organizing, restricting access, and setting usage limits to OCI resources. An object storage bucket for safely and securely storing the discovery ISO image. You can access the image at a later stage for the purposes of booting the instances, so that you can then create your cluster. Prerequisites You created a child compartment and an object storage bucket on OCI. See Provisioning Cloud Infrastructure (OCI Console) in the Oracle documentation. You reviewed details about the OpenShift Container Platform installation and update processes. If you use a firewall and you plan to use a Telemetry service, you configured your firewall to allow OpenShift Container Platform to access the sites required. Before you create a virtual machines (VM), see Cloud instance types (Red Hat Ecosystem Catalog portal) to identify the supported OCI VM shapes. Procedure From the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, generate the discovery ISO image by completing all the required Assisted Installer steps. In the Cluster Details step, complete the following fields: Field Action required Cluster name Specify the name of your cluster, such as ocidemo . Base domain Specify the base domain of the cluster, such as splat-oci.devcluster.openshift.com . Provided you previously created a compartment on OCI, you can get this information by going to DNS management Zones List scope and then selecting the parent compartment. Your base domain should show under the Public zones tab. OpenShift version Specify OpenShift 4.15 or a later version. CPU architecture Specify x86_64 or Arm64 . Integrate with external partner platforms Specify Oracle Cloud Infrastructure . After you specify this value, the Include custom manifests checkbox is selected by default. On the Operators page, click . On the Host Discovery page, click Add hosts . For the SSH public key field, add your SSH key from your local system. Tip You can create an SSH authentication key pair by using the ssh-keygen tool. Click Generate Discovery ISO to generate the discovery ISO image file. Download the file to your local system. Upload the discovery ISO image to the OCI bucket. See Uploading an Object Storage Object to a Bucket (Oracle documentation) . You must create a pre-authenticated request for your uploaded discovery ISO image. Ensure that you make note of the URL from the pre-authenticated request, because you must specify the URL at a later stage when you create an OCI stack. Additional resources Installation and update Configuring your firewall 1.4. Provisioning OCI infrastructure for your cluster By using the Assisted Installer to create details for your OpenShift Container Platform cluster, you can specify these details in a stack. 
A stack is an OCI feature where you can automate the provisioning of all necessary OCI infrastructure resources, such as the custom image, that are required for installing an OpenShift Container Platform cluster on OCI. The Oracle(R) Cloud Infrastructure (OCI) Compute Service creates a virtual machine (VM) instance on OCI. This instance can then automatically attach to a virtual network interface controller (vNIC) in the virtual cloud network (VCN) subnet. On specifying the IP address of your OpenShift Container Platform cluster in the custom manifest template files, the OCI instance can communicate with your cluster over the VCN. Prerequisites You uploaded the discovery ISO image to the OCI bucket. For more information, see "Using the Assisted Installer to generate an OCI-compatible discovery ISO image". Procedure Complete the steps for provisioning OCI infrastructure for your OpenShift Container Platform cluster. See Creating OpenShift Container Platform Infrastructure Using Resource Manager (Oracle documentation) . Create a stack, and then edit the custom manifest files according to the steps in Editing the OpenShift Custom Manifests (Oracle documentation) . 1.5. Completing the remaining Assisted Installer steps After you provision Oracle(R) Cloud Infrastructure (OCI) resources and upload OpenShift Container Platform custom manifest configuration files to OCI, you must complete the remaining cluster installation steps on the Assisted Installer before you can create an instance on OCI. Prerequisites You created a resource stack on OCI that includes the custom manifest configuration files and OCI Resource Manager configuration resources. See "Provisioning OCI infrastructure for your cluster". Procedure From the Red Hat Hybrid Cloud Console web console, go to the Host discovery page. Under the Role column, select either Control plane node or Worker for each targeted hostname. Important Before you continue to the next steps, wait for each node to reach the Ready status. Accept the default settings for the Storage and Networking steps, and then click Next . On the Custom manifests page, in the Folder field, select manifest . This is the Assisted Installer folder where you want to save the custom manifest file. In the File name field, enter a value such as oci-ccm.yml . From the Content section, click Browse , and select the CCM manifest from your drive located in custom_manifest/manifests/oci-ccm.yml . Expand the Custom manifest section and repeat the same steps for the following manifests: CSI driver manifest: custom_manifest/manifests/oci-csi.yml CCM machine configuration: custom_manifest/openshift/machineconfig-ccm.yml CSI driver machine configuration: custom_manifest/openshift/machineconfig-csi.yml From the Review and create page, click Install cluster to create your OpenShift Container Platform cluster on OCI. After the cluster installation and initialization operations, the Assisted Installer indicates the completion of the cluster installation operation. For more information, see the "Completing the installation" section in the Assisted Installer for OpenShift Container Platform document. Additional resources Assisted Installer for OpenShift Container Platform 1.6. Verifying a successful cluster installation on OCI Verify that your cluster was installed and is running effectively on Oracle(R) Cloud Infrastructure (OCI). Procedure From the Hybrid Cloud Console, go to Clusters > Assisted Clusters and select your cluster's name.
Check that the Installation progress bar is at 100% and a message displays indicating "Installation completed successfully". To access the OpenShift Container Platform web console, click the provided Web Console URL. Go to the Nodes menu page. Locate your node from the Nodes table. From the Overview tab, check that your node has a Ready status. Select the YAML tab. Check the labels parameter, and verify that the listed labels apply to your configuration. For example, the topology.kubernetes.io/region=us-sanjose-1 label indicates in what OCI region the node was deployed. 1.7. Troubleshooting the installation of a cluster on OCI If you experience issues with using the Assisted Installer to install an OpenShift Container Platform cluster on Oracle(R) Cloud Infrastructure (OCI), read the following sections to troubleshoot common problems. The Ingress Load Balancer in OCI is not at a healthy status This issue is classed as a Warning because by using the Resource Manager to create a stack, you created a pool of compute nodes, 3 by default, that are automatically added as backend listeners for the Ingress Load Balancer. By default, the OpenShift Container Platform deploys 2 router pods, which are based on the default values from the OpenShift Container Platform manifest files. The Warning is expected because a mismatch exists with the number of router pods available, two, to run on the three compute nodes. Figure 1.2. Example of a Warning message that is under the Backend set information tab on OCI: You do not need to modify the Ingress Load Balancer configuration. Instead, you can point the Ingress Load Balancer to specific compute nodes that operate in your cluster on OpenShift Container Platform. To do this, use placement mechanisms, such as annotations, on OpenShift Container Platform to ensure router pods only run on the compute nodes that you originally configured on the Ingress Load Balancer as backend listeners. OCI create stack operation fails with an Error: 400-InvalidParameter message On attempting to create a stack on OCI, you identified that the Logs section of the job outputs an error message. For example: Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn Go to the Install OpenShift with the Assisted Installer page on the Hybrid Cloud Console, and check the Cluster name field on the Cluster Details step. Remove any special characters, such as a hyphen ( - ), from the name, because these special characters are not compatible with the OCI naming conventions. For example, change oci-demo to ocidemo . Additional resources Troubleshooting OpenShift Container Platform on OCI (Oracle documentation) Installing an on-premise cluster using the Assisted Installer
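One way to apply the placement approach described above is to constrain the default IngressController to the compute nodes that are registered as backend listeners for the load balancer. The following sketch is an example only: the node names and the ingress=oci-lb label are hypothetical, although the nodePlacement field itself is part of the standard IngressController API.

# Label the compute nodes that back the OCI Ingress Load Balancer (example node names).
oc label node compute-node-0 compute-node-1 compute-node-2 ingress=oci-lb

# Pin the default router pods to those nodes and align the replica count
# with the number of backend nodes.
oc patch ingresscontroller default -n openshift-ingress-operator --type merge \
  -p '{"spec":{"replicas":3,"nodePlacement":{"nodeSelector":{"matchLabels":{"ingress":"oci-lb"}}}}}'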
[ "Error: 400-InvalidParameter, DNS Label oci-demo does not follow Oracle requirements Suggestion: Please update the parameter(s) in the Terraform config as per error message DNS Label oci-demo does not follow Oracle requirements Documentation: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_vcn" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_oci/installing-oci-assisted-installer
Security APIs
Security APIs OpenShift Container Platform 4.12 Reference guide for security APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/index
9.3. Performing Integrity Checks
9.3. Performing Integrity Checks To initiate a manual check, enter the following command as root : At a minimum, AIDE should be configured to run a weekly scan. At most, AIDE should be run daily. For example, to schedule a daily execution of AIDE at 4:05 am using cron (see the Automating System Tasks chapter in the System Administrator's Guide), add the following line to /etc/crontab :
[ "~]# aide --check AIDE found differences between database and filesystem!! Start timestamp: 2017-04-07 17:11:33 Summary: Total number of files: 104892 Added files: 7 Removed files: 0 Changed files: 52", "05 4 * * * root /usr/sbin/aide --check" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-aide_scan
Chapter 6. Understanding identity provider configuration
Chapter 6. Understanding identity provider configuration The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 6.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Once an identity provider has been defined, you can use RBAC to define and apply permissions . 6.3. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 6.4. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. 
lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 6.5. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 6.6. Manually provisioning a user when using the lookup mapping method Typically, identities are automatically mapped to users during login. The lookup mapping method disables this automatic mapping, which requires you to provision users manually. If you are using the lookup mapping method, use the following procedure for each user after configuring the identity provider. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create an OpenShift Container Platform user: USD oc create user <username> Create an OpenShift Container Platform identity: USD oc create identity <identity_provider>:<identity_provider_user_id> Where <identity_provider_user_id> is a name that uniquely represents the user in the identity provider. Create a user identity mapping for the created user and identity: USD oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username> Additional resources How to create user, identity and map user and identity in LDAP authentication for mappingMethod as lookup inside the OAuth manifest How to create user, identity and map user and identity in OIDC authentication for mappingMethod as lookup
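As a concrete illustration of the sample CR above, the htpass-secret referenced in the fileData stanza can be created from a flat file that you generate with the htpasswd utility. This is a minimal sketch; the user name, password, and file name are examples only.

# Create a flat file with a bcrypt-hashed entry for an example user.
htpasswd -c -B -b users.htpasswd user1 MyPassword!
# Store the file as a secret in the openshift-config namespace under the key "htpasswd",
# using the secret name referenced by the sample OAuth CR.
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config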
[ "oc delete secrets kubeadmin -n kube-system", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc create user <username>", "oc create identity <identity_provider>:<identity_provider_user_id>", "oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authentication_and_authorization/understanding-identity-provider
Chapter 6. Listing existing functions
Chapter 6. Listing existing functions You can list existing functions by using the kn func tool. 6.1. Listing existing functions You can list existing functions by using kn func list . If you want to list functions that have been deployed as Knative services, you can also use kn service list . Procedure List existing functions: USD kn func list [-n <namespace> -p <path>] Example output NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True List functions deployed as Knative services: USD kn service list -n <namespace> Example output NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True
[ "kn func list [-n <namespace> -p <path>]", "NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True", "kn service list -n <namespace>", "NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/serverless-functions-listing
8.17. Begin Installation
8.17. Begin Installation When all required sections of the Installation Summary screen have been completed, the admonition at the bottom of the menu screen disappears and the Begin Installation button becomes available. Figure 8.38. Ready to Install Warning Up to this point in the installation process, no lasting changes have been made on your computer. When you click Begin Installation , the installation program will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, return to the relevant section of the Installation Summary screen. To cancel installation completely, click Quit or switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. If you have finished customizing your installation and are certain that you want to proceed, click Begin Installation . After you click Begin Installation , allow the installation process to complete. If the process is interrupted, for example, by you switching off or resetting the computer, or by a power outage, you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-write-changes-to-disk-x86
Chapter 6. View OpenShift Data Foundation Topology
Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_z/viewing-odf-topology_mcg-verify
Appendix A. Running the certification tests by using cockpit
Appendix A. Running the certification tests by using cockpit Note Using cockpit to run the certification tests is optional . Use the following procedure to set up and run the certification tests by using cockpit. A.1. Configuring the system and running tests by using Cockpit To run the certification tests by using Cockpit you need to upload the test plan to the SUT first. After running the tests, download the results and review them. Note Although it is not mandatory, Red Hat recommends you to configure and use Cockpit for the certification process. Configuring cockpit greatly helps you to manage and monitor the certification process on the SUT. A.1.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit either on the SUT or a new system. Ensure that the Cockpit has access to SUT. Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. By default, Cockpit runs on port 9090. Additional resources For more information about installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . A.1.2. Adding system under test to Cockpit Adding the system under test (SUT) to Cockpit lets them communicate by using passwordless SSH. Prerequisites You have the IP address or hostname of the SUT. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the SUT through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the SUT you just added displays below the Hosts section on the right. A.1.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. A.1.4. Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. 
Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. A.1.5. Using the test plan to prepare the system under test for testing Provisioning the system under test (SUT) includes the following operations: setting up passwordless SSH communication with cockpit installing the required packages on your system based on the certification type creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required software packages will be installed if the test plan is designed for certifying a software product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select System under test and click Submit . By default, the file is uploaded to path: /var/rhcert/plans/<testplanfile.xml> A.1.6. Running the certification tests using Cockpit Prerequisites You have prepared the system under test. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests. Click the Terminal tab and select Run. A list of recommended tests based on the test plan uploaded displays. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . A.1.7. Reviewing and downloading the results file of the executed test plan Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . A.1.8. 
Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal.
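For reference, the command-line steps that accompany this workflow can be summarized as the following minimal sketch. It assumes a RHEL 8 or 9 host with access to the Red Hat Certification repositories; the redhat-certification-cockpit package and the rhcert-clean command are the ones mentioned in this appendix, while cockpit.socket is assumed to be the standard systemd unit for the Cockpit web console.

# Install the Cockpit plugin provided by the Red Hat Certification team
dnf install redhat-certification-cockpit

# Enable the Cockpit web console (listens on port 9090 by default)
systemctl enable --now cockpit.socket

# Clear previous test state before uploading a redesigned test plan
rhcert-clean all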
[ "dnf install redhat-certification-cockpit" ]
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/appendix-for-non-container-certification_running-certification-tests-by-using-cli-and-downloading-the-results-file
Appendix B. Address Setting Configuration Elements
Appendix B. Address Setting Configuration Elements The table below lists all of the configuration elements of an address-setting . Note that some elements are marked DEPRECATED. Use the suggested replacement to avoid potential issues. Table B.1. Address Setting Elements Name Description address-full-policy Determines what happens when an address configured with a max-size-bytes becomes full. The available policies are: PAGE : messages sent to a full address will be paged to disk. DROP : messages sent to a full address will be silently dropped. FAIL : messages sent to a full address will be dropped and the message producers will receive an exception. BLOCK : message producers will block when they try and send any further messages. Note The BLOCK policy works only for the AMQP, OpenWire, and Core Protocol protocols because they feature flow control. auto-create-addresses Whether to automatically create addresses when a client sends a message to or attempts to consume a message from a queue mapped to an address that does not exist. The default value is true . auto-create-dead-letter-resources Specifies whether the broker automatically creates a dead letter address and queue to receive undelivered messages. The default value is false . If the parameter is set to true , the broker automatically creates an <address> element that defines a dead letter address and an associated dead letter queue. The name of the automatically-created <address> element matches the name value that you specify for <dead-letter-address> . auto-create-jms-queues DEPRECATED: Use auto-create-queues instead. Determines whether this broker should automatically create a JMS queue corresponding to the address settings match when a JMS producer or a consumer tries to use such a queue. The default value is false . auto-create-jms-topics DEPRECATED: Use auto-create-queues instead. Determines whether this broker should automatically create a JMS topic corresponding to the address settings match when a JMS producer or a consumer tries to use such a topic. The default value is false . auto-create-queues Whether to automatically create a queue when a client sends a message to or attempts to consume a message from a queue. The default value is true . auto-delete-addresses Whether to delete auto-created addresses when the broker no longer has any queues. The default value is true . auto-delete-jms-queues DEPRECATED: Use auto-delete-queues instead. Determines whether AMQ Broker should automatically delete auto-created JMS queues when they have no consumers and no messages. The default value is false . auto-delete-jms-topics DEPRECATED: Use auto-delete-queues instead. Determines whether AMQ Broker should automatically delete auto-created JMS topics when they have no consumers and no messages. The default value is false . auto-delete-queues Whether to delete auto-created queues when the queue has no consumers and no messages. The default value is true . config-delete-addresses When the configuration file is reloaded, this setting specifies how to handle an address (and its queues) that has been deleted from the configuration file. You can specify the following values: OFF (default) The address is not deleted when the configuration file is reloaded. FORCE The address and its queues are deleted when the configuration file is reloaded. If there are any messages in the queues, they are removed also.
config-delete-queues When the configuration file is reloaded, this setting specifies how to handle queues that have been deleted from the configuration file. You can specify the following values: OFF (default) The queue is not deleted when the configuration file is reloaded. FORCE The queue is deleted when the configuration file is reloaded. If there are any messages in the queue, they are removed also. dead-letter-address The address to which the broker sends dead messages. dead-letter-queue-prefix Prefix that the broker applies to the name of an automatically-created dead letter queue. The default value is DLQ. dead-letter-queue-suffix Suffix that the broker applies to an automatically-created dead letter queue. The default value is not defined (that is, the broker applies no suffix). default-address-routing-type The routing-type used on auto-created addresses. The default value is MULTICAST . default-max-consumers The maximum number of consumers allowed on this queue at any one time. The default value is 200 . default-purge-on-no-consumers Whether to purge the contents of the queue once there are no consumers. The default value is false . default-queue-routing-type The routing-type used on auto-created queues. The default value is MULTICAST . enable-metrics Specifies whether a configured metrics plugin such as the Prometheus plugin collects metrics for a matching address or set of addresses. The default value is true . expiry-address The address that will receive expired messages. expiry-delay Defines the expiration time in milliseconds that will be used for messages using the default expiration time. The default value is -1 , which means no expiration time. last-value-queue Whether a queue uses only last values or not. The default value is false . management-browse-page-size How many messages a management resource can browse. The default value is 200 . max-delivery-attempts How many times to attempt to deliver a message before sending it to the dead letter address. The default is 10 . max-redelivery-delay Maximum value for the redelivery-delay, in milliseconds. max-size-bytes The maximum memory size for this address, specified in bytes. Used when the address-full-policy is PAGE , BLOCK , or FAIL , this value is specified in byte notation such as "K", "Mb", and "GB". The default value is -1 , which denotes infinite bytes. This parameter is used to protect broker memory by limiting the amount of memory consumed by a particular address space. This setting does not represent the total amount of bytes sent by the client that are currently stored in broker address space. It is an estimate of broker memory utilization. This value can vary depending on runtime conditions and certain workloads. It is recommended that you allocate the maximum amount of memory that can be afforded per address space. Under typical workloads, the broker requires approximately 150% to 200% of the payload size of the outstanding messages in memory. max-size-bytes-reject-threshold Used when the address-full-policy is BLOCK . The maximum size, in bytes, that an address can reach before the broker begins to reject messages. Works in combination with max-size-bytes for the AMQP protocol only. The default value is -1 , which means no limit. message-counter-history-day-limit How many days to keep a message counter history for this address. The default value is 0 . page-max-cache-size The number of page files to keep in memory to optimize I/O during paging navigation. The default value is 5 .
page-size-bytes The paging size in bytes. Also supports byte notation like K , Mb , and GB . The default value is 10485760 bytes, almost 10.5 MB. redelivery-delay The time, in milliseconds, to wait before redelivering a cancelled message. The default value is 0 . redelivery-delay-multiplier Multiplier to apply to the redelivery-delay parameter. The default value is 1.0 . redistribution-delay Defines how long to wait in milliseconds after the last consumer is closed on a queue before redistributing any messages. The default value is -1 . send-to-dla-on-no-route When set to true , a message will be sent to the configured dead letter address if it cannot be routed to any queues. The default value is false . slow-consumer-check-period How often to check, in seconds, for slow consumers. The default value is 5 . slow-consumer-policy Determines what happens when a slow consumer is identified. Valid options are KILL or NOTIFY . KILL kills the consumer's connection, which impacts any client threads using that same connection. NOTIFY sends a CONSUMER_SLOW management notification to the client. The default value is NOTIFY . slow-consumer-threshold The minimum rate of message consumption allowed before a consumer is considered slow. Measured in messages-per-second. The default value is -1 , which is unbounded.
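To show how several of these elements fit together, the following is a minimal, illustrative address-setting entry as it might appear in a broker.xml configuration file. The match pattern and the values shown are assumptions chosen only to demonstrate the syntax, not recommended settings.

<address-settings>
    <address-setting match="my.example.#">
        <dead-letter-address>DLA</dead-letter-address>
        <expiry-address>ExpiryQueue</expiry-address>
        <max-delivery-attempts>10</max-delivery-attempts>
        <redelivery-delay>5000</redelivery-delay>
        <max-size-bytes>104857600</max-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
    </address-setting>
</address-settings>

An entry like this typically applies the listed overrides to every address whose name matches the wildcard expression in the match attribute.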
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/address_setting_attributes
Chapter 5. Creating Multus networks
Chapter 5. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and they must specify the range field. ipRanges and plugin chaining are not supported. You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ).
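After you create the NetworkAttachmentDefinition resources, it can be useful to confirm that they exist in the openshift-storage namespace before you start the Storage Cluster installation. The following is a brief sketch using the oc CLI; the file names are assumptions based on the examples in this chapter.

oc create -f ocs-public.yaml
oc create -f ocs-cluster.yaml
oc get network-attachment-definitions -n openshift-storage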
[ "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/creating-multus-networks_rhodf
8.2.4. Installing Packages
8.2.4. Installing Packages Yum allows you to install both a single package and multiple packages, as well as a package group of your choice. Installing Individual Packages To install a single package and all of its non-installed dependencies, enter a command in the following form: yum install package_name You can also install multiple packages simultaneously by appending their names as arguments: yum install package_name package_name If you are installing packages on a multilib system, such as an AMD64 or Intel 64 machine, you can specify the architecture of the package (as long as it is available in an enabled repository) by appending .arch to the package name. For example, to install the sqlite package for i686 , type: You can use glob expressions to quickly install multiple similarly-named packages: In addition to package names and glob expressions, you can also provide file names to yum install . If you know the name of the binary you want to install, but not its package name, you can give yum install the path name: yum then searches through its package lists, finds the package which provides /usr/sbin/named , if any, and prompts you as to whether you want to install it. Note If you know you want to install the package that contains the named binary, but you do not know in which bin or sbin directory is the file installed, use the yum provides command with a glob expression: yum provides "*/ file_name " is a common and useful trick to find the package(s) that contain file_name . Installing a Package Group A package group is similar to a package: it is not useful by itself, but installing one pulls a group of dependent packages that serve a common purpose. A package group has a name and a groupid . The yum grouplist -v command lists the names of all package groups, and, to each of them, their groupid in parentheses. The groupid is always the term in the last pair of parentheses, such as kde-desktop in the following example: You can install a package group by passing its full group name (without the groupid part) to groupinstall : yum groupinstall group_name You can also install by groupid: yum groupinstall groupid You can even pass the groupid (or quoted name) to the install command if you prepend it with an @ -symbol (which tells yum that you want to perform a groupinstall ): yum install @ group For example, the following are alternative but equivalent ways of installing the KDE Desktop group:
[ "~]# yum install sqlite.i686", "~]# yum install perl-Crypt-\\*", "~]# yum install /usr/sbin/named", "~]# yum provides \"*bin/named\" Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 32:bind-9.7.0-4.P1.el6.x86_64 : The Berkeley Internet Name Domain (BIND) : DNS (Domain Name System) server Repo : rhel Matched from: Filename : /usr/sbin/named", "~]# yum -v grouplist kde\\* Loading \"product-id\" plugin Loading \"refresh-packagekit\" plugin Loading \"subscription-manager\" plugin Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Config time: 0.123 Yum Version: 3.2.29 Setting up Group Process Looking for repo options for [rhel] rpmdb time: 0.001 group time: 1.291 Available Groups: KDE Desktop (kde-desktop) Done", "~]# yum groupinstall \"KDE Desktop\" ~]# yum groupinstall kde-desktop ~]# yum install @kde-desktop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-installing
Chapter 2. Installing security updates
Chapter 2. Installing security updates In RHEL, you can install a specific security advisory and all available security updates. You can also configure the system to download and install security updates automatically. 2.1. Installing all available security updates To keep the security of your system up to date, you can install all currently available security updates using the yum utility. Prerequisites A Red Hat subscription is attached to the host. Procedure Install security updates using yum utility: Without the --security parameter, yum update installs all updates, including bug fixes and enhancements. Confirm and start the installation by pressing y : Optional: List processes that require a manual restart of the system after installing the updated packages: The command lists only processes that require a restart, and not services. That is, you cannot restart processes listed using the systemctl utility. For example, the bash process in the output is terminated when the user that owns this process logs out. 2.2. Installing a security update provided by a specific advisory In certain situations, you might want to install only specific updates. For example, if a specific service can be updated without scheduling a downtime, you can install security updates for only this service, and install the remaining security updates later. Prerequisites A Red Hat subscription is attached to the host. You know the ID of the security advisory that you want to update. For more information, see the Identifying the security advisory updates section. Procedure Install a specific advisory, for example: Alternatively, update to apply a specific advisory with a minimal version change by using the yum upgrade-minimal command, for example: Confirm and start the installation by pressing y : Optional: List the processes that require a manual restart of the system after installing the updated packages: The command lists only processes that require a restart, and not services. This means that you cannot restart all processes listed by using the systemctl utility. For example, the bash process in the output is terminated when the user that owns this process logs out. 2.3. Installing security updates automatically You can configure your system so that it automatically downloads and installs all security updates. Prerequisites A Red Hat subscription is attached to the host. The dnf-automatic package is installed. Procedure In the /etc/dnf/automatic.conf file, in the [commands] section, make sure the upgrade_type option is set to either default or security : Enable and start the systemd timer unit: Verification Verify that the timer is enabled: Additional resources dnf-automatic(8) man page on your system
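As a supplement to the procedures above, you can list the security advisories that currently apply to the system by using the yum updateinfo subcommand. This is a minimal sketch, and the advisory ID shown is simply the example ID used earlier in this chapter.

yum updateinfo list security
yum updateinfo info RHSA-2019:0997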
[ "yum update --security", "... Transaction Summary =========================================== Upgrade ... Packages Total download size: ... M Is this ok [y/d/N]: y", "yum needs-restarting 1107 : /usr/sbin/rsyslogd -n 1199 : -bash", "yum update --advisory=RHSA-2019:0997", "yum upgrade-minimal --advisory=RHSA-2019:0997", "... Transaction Summary =========================================== Upgrade ... Packages Total download size: ... M Is this ok [y/d/N]: y", "yum needs-restarting 1107 : /usr/sbin/rsyslogd -n 1199 : -bash", "What kind of upgrade to perform: default = all available upgrades security = only the security upgrades upgrade_type = security", "systemctl enable --now dnf-automatic-install.timer", "systemctl status dnf-automatic-install.timer" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_and_monitoring_security_updates/installing-security-updates_managing-and-monitoring-security-updates
Telemetry data collection
Telemetry data collection Red Hat Developer Hub 1.3 Collecting and analyzing telemetry data to enhance Red Hat Developer Hub experience Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/telemetry_data_collection/index
Chapter 18. Configuring JAX-RS Endpoints
Chapter 18. Configuring JAX-RS Endpoints Abstract This chapter explains how to instantiate and configure JAX-RS server endpoints in Blueprint XML and in Spring XML, and also how to instantiate and configure JAX-RS client endpoints (client proxy beans) in XML 18.1. Configuring JAX-RS Server Endpoints 18.1.1. Defining a JAX-RS Server Endpoint Basic server endpoint definition To define a JAX-RS server endpoint in XML, you need to specify at least the following: A jaxrs:server element, which is used to define the endpoint in XML. Note that the jaxrs: namespace prefix maps to different namespaces in Blueprint and in Spring respectively. The base URL of the JAX-RS service, using the address attribute of the jaxrs:server element. Note that there are two different ways of specifying the address URL, which affects how the endpoint gets deployed: As a relative URL -for example, /customers . In this case, the endpoint is deployed into the default HTTP container, and the endpoint's base URL is implicitly obtained by combining the CXF servlet base URL with the specified relative URL. For example, if you deploy a JAX-RS endpoint to the Fuse container, the specified /customers URL would get resolved to the URL, http://Hostname:8181/cxf/customers (assuming that the container is using the default 8181 port). As an absolute URL - for example, http://0.0.0.0:8200/cxf/customers . In this case, a new HTTP listener port is opened for the JAX-RS endpoint (if it is not already open). For example, in the context of Fuse, a new Undertow container would implicitly be created to host the JAX-RS endpoint. The special IP address, 0.0.0.0 , acts as a wildcard, matching any of the hostnames assigned to the current host (which can be useful on multi-homed host machines). One or more JAX-RS root resource classes, which provide the implementation of the JAX-RS service. The simplest way to specify the resource classes is to list them inside a jaxrs:serviceBeans element. Blueprint example The following Blueprint XML example shows how to define a JAX-RS endpoint, which specifies the relative address, /customers (so that it deploys into the default HTTP container) and is implemented by the service.CustomerService resource class: Blueprint XML namespaces To define a JAX-RS endpoint in Blueprint, you typically require at least the following XML namespaces: Prefix Namespace (default) http://www.osgi.org/xmlns/blueprint/v1.0.0 cxf http://cxf.apache.org/blueprint/core jaxrs http://cxf.apache.org/blueprint/jaxrs Spring example The following Spring XML example shows how to define a JAX-RS endpoint, which specifies the relative address, /customers (so that it deploys into the default HTTP container) and is implemented by the service.CustomerService resource class: Spring XML namespaces To define a JAX-RS endpoint in Spring, you typically require at least the following XML namespaces: Prefix Namespace (default) http://www.springframework.org/schema/beans cxf http://cxf.apache.org/core jaxrs http://cxf.apache.org/jaxrs Auto-discovery in Spring XML (Spring only) Instead of specifying the JAX-RS root resource classes explicitly, Spring XML enables you to configure auto-discovery, so that specific Java packages are searched for resource classes (classes annotated by @Path ) and all of the discovered resource classes are automatically attached to the endpoint. In this case, you need to specify just the address attribute and the basePackages attribute in the jaxrs:server element. 
For example, to define a JAX-RS endpoint which uses all of the JAX-RS resource classes under the a.b.c Java package, you can define the endpoint in Spring XML, as follows: The auto-discovery mechanism also discovers and installs into the endpoint any JAX-RS provider classes that it finds under the specified Java packages. Lifecycle management in Spring XML (Spring only) Spring XML enables you to control the lifecycle of beans by setting the scope attribute on a bean element. The following scope values are supported by Spring: singleton (Default) Creates a single bean instance, which is used everywhere and lasts for the entire lifetime of the Spring container. prototype Creates a new bean instance every time the bean is injected into another bean or when a bean is obtained by invoking getBean() on the bean registry. request (Only available in a Web-aware container) Creates a new bean instance for every request invoked on the bean. session (Only available in a Web-aware container) Creates a new bean for the lifetime of a single HTTP session. globalSession (Only available in a Web-aware container) Creates a new bean for the lifetime of a single HTTP session that is shared between portlets. For more details about Spring scopes, please consult the Spring framework documentation on Bean scopes . Note that Spring scopes do not work properly , if you specify JAX-RS resource beans through the jaxrs:serviceBeans element. If you specify the scope attribute on the resource beans in this case, the scope attribute is effectively ignored. In order to make bean scopes work properly within a JAX-RS server endpoint, you require a level of indirection that is provided by a service factory. The simplest way to configure bean scopes is to specify resource beans using the beanNames attribute on the jaxrs:server element, as follows: Where the preceding example configures two resource beans, customerBean1 and customerBean2 . The beanNames attribute is specified as a space-separated list of resource bean IDs. For the ultimate degree of flexibility, you have the option of defining service factory objects explicitly , when you configure the JAX-RS server endpoint, using the jaxrs:serviceFactories element. This more verbose approach has the advantage that you can replace the default service factory implementation with your custom implementation, thus giving you ultimate control over the bean lifecycle. The following example shows how to configure the two resource beans, customerBean1 and customerBean2 , using this approach: Note If you specify a non-singleton lifecycle, it is often a good idea to implement and register a org.apache.cxf.service.Invoker bean (where the instance can be registered by referencing it from a jaxrs:server/jaxrs:invoker element). Attaching a WADL document You can optionally associate a WADL document with the JAX-RS server endpoint using the docLocation attribute on the jaxrs:server element. For example: Schema validation If you have some external XML schemas, for describing message content in JAX-B format, you can associate these external schemas with the JAX-RS server endpoint through the jaxrs:schemaLocations element. 
For example, if you have associated the server endpoint with a WADL document and you also want to enable schema validation on incoming messages, you can specify associated XML schema files as follows: Alternatively, if you want to include all of the schema files, *.xsd , in a given directory, you can just specify the directory name, as follows: Specifying schemas in this way is generally useful for any kind of functionality that requires access to the JAX-B schemas. Specifying the data binding You can use the jaxrs:dataBinding element to specify the data binding that encodes the message body in request and reply messages. For example, to specify the JAX-B data binding, you could configure a JAX-RS endpoint as follows: Or to specify the Aegis data binding, you could configure a JAX-RS endpoint as follows: Using the JMS transport It is possible to configure JAX-RS to use a JMS messaging library as a transport protocol, instead of HTTP. Because JMS itself is not a transport protocol, the actual messaging protocol depends on the particular JMS implementation that you configure. For example, the following Spring XML example shows how to configure a JAX-RS server endpoint to use the JMS transport protocol: Note the following points about the preceding example: JMS implementation -the JMS implementation is provided by the ConnectionFactory bean, which instantiates an Apache ActiveMQ connection factory object. After you instantiate the connection factory, it is automatically installed as the default JMS implementation layer. JMS conduit or destination object -Apache CXF implicitly instantiates a JMS conduit object (to represent a JMS consumer) or a JMS destination object (to represent a JMS provider). This object must be uniquely identified by a QName, which is defined through the attribute setttings xmlns:s="http://books.com" (defining the namespace prefix) and serviceName="s:BookService" (defining the QName). Transport ID -to select the JMS transport, the transportId attribute must be set to http://cxf.apache.org/transports/jms . JMS address -the jaxrs:server/@address attribute uses a standardized syntax to specify the JMS queue or JMS topic to send to. For details of this syntax, see https://tools.ietf.org/id/draft-merrick-jms-uri-06.txt . Extension mappings and language mappings A JAX-RS server endpoint can be configured so that it automatically maps a file suffix (appearing in the URL) to a MIME content type header, and maps a language suffix to a language type header. For example, consider a HTTP request of the following form: You can configure the JAX-RS server endpoint to map the .xml suffix automatically, as follows: When the preceding server endpoint receives the HTTP request, it automatically creates a new content type header of type, application/xml , and strips the .xml suffix from the resource URL. For the language mapping, consider a HTTP request of the following form: You can configure the JAX-RS server endpoint to map the .en suffix automatically, as follows: When the preceding server endpoint receives the HTTP request, it automatically creates a new accept language header with the value, en-gb , and strips the .en suffix from the resource URL. 18.1.2. jaxrs:server Attributes Attributes Table 18.1, "JAX-RS Server Endpoint Attributes" describes the attributes available on the jaxrs:server element. Table 18.1. JAX-RS Server Endpoint Attributes Attribute Description id Specifies a unique identifier that other configuration elements can use to refer to the endpoint. 
address Specifies the address of an HTTP endpoint. This value will override the value specified in the services contract. basePackages (Spring only) Enables auto-discovery, by specifying a comma-separated list of Java packages, which are searched to discover JAX-RS root resource classes and/or JAX-RS provider classes. beanNames Specifies a space-separated list of bean IDs of JAX-RS root resource beans. In the context of Spring XML, it is possible to define a root resource beans' lifecycle by setting the scope attribute on the root resource bean element. bindingId Specifies the ID of the message binding the service uses. A list of valid binding IDs is provided in Chapter 23, Apache CXF Binding IDs . bus Specifies the ID of the Spring bean configuring the bus used to manage the service endpoint. This is useful when configuring several endpoints to use a common set of features. docLocation Specifies the location of an external WADL document. modelRef Specifies a model schema as a classpath resource (for example, a URL of the form classpath:/path/to/model.xml ). For details of how to define a JAX-RS model schema, see Section 18.3, "Defining REST Services with the Model Schema" . publish Specifies if the service should be automatically published. If set to false , the developer must explicitly publish the endpoint. publishedEndpointUrl Specifies the URL base address, which gets inserted into the wadl:resources/@base attribute of the auto-generated WADL interface. serviceAnnotation (Spring only) Specifies the service annotation class name for auto-discovery in Spring. When used in combination with the basePackages property, this option restricts the collection of auto-discovered classes to include only the classes that are annotated by this annotation type. serviceClass Specifies the name of a JAX-RS root resource class (which implements a JAX-RS service). In this case, the class is instantiated by Apache CXF, not by Blueprint or Spring. If you want to instantiate the class in Blueprint or Spring, use the jaxrs:serviceBeans child element instead. serviceName Specifies the service QName (using the format ns : name ) for the JAX-RS endpoint in the special case where a JMS transport is used. For details, see the section called "Using the JMS transport" . staticSubresourceResolution If true , disables dynamic resolution of static sub-resources. Default is false . transportId For selecting a non-standard transport layer (in place of HTTP). In particular, you can select the JMS transport by setting this property to http://cxf.apache.org/transports/jms . For details, see the section called "Using the JMS transport" . abstract (Spring only) Specifies if the bean is an abstract bean. Abstract beans act as parents for concrete bean definitions and are not instantiated. The default is false . Setting this to true instructs the bean factory not to instantiate the bean. depends-on (Spring only) Specifies a list of beans that the endpoint depends on being instantiated before the endpoint can be instantiated. 18.1.3. jaxrs:server Child Elements Child elements Table 18.2, "JAX-RS Server Endpoint Child Elements" describes the child elements of the jaxrs:server element. Table 18.2. JAX-RS Server Endpoint Child Elements Element Description jaxrs:executor Specifies a Java Executor (thread pool implementation) that is used for the service. This is specified using an embedded bean definition. jaxrs:features Specifies a list of beans that configure advanced features of Apache CXF.
You can provide either a list of bean references or a list of embedded beans. jaxrs:binding Not used. jaxrs:dataBinding Specifies the class implementing the data binding used by the endpoint. This is specified using an embedded bean definition. For more details, see the section called "Specifying the data binding" . jaxrs:inInterceptors Specifies a list of interceptors that process inbound requests. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:inFaultInterceptors Specifies a list of interceptors that process inbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:outInterceptors Specifies a list of interceptors that process outbound replies. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:outFaultInterceptors Specifies a list of interceptors that process outbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:invoker Specifies an implementation of the org.apache.cxf.service.Invoker interface used by the service. [a] jaxrs:serviceFactories Provides you with the maximum degree of control over the lifecycle of the JAX-RS root resources associated with this endpoint. The children of this element (which must be instances of org.apache.cxf.jaxrs.lifecycle.ResourceProvider type) are used to create JAX-RS root resource instances. jaxrs:properties Specifies a Spring map of properties that are passed along to the endpoint. These properties can be used to control features like enabling MTOM support. jaxrs:serviceBeans The children of this element are instances of ( bean element) or references to ( ref element) JAX-RS root resources. Note that in this case the scope attribute (Spring only) , if present in the bean element, is ignored. jaxrs:modelBeans Consists of a list of references to one or more org.apache.cxf.jaxrs.model.UserResource beans, which are the basic elements of a resource model (corresponding to jaxrs:resource elements). For details, see Section 18.3, "Defining REST Services with the Model Schema" . jaxrs:model Defines a resource model directly in this endpoint (that is, this jaxrs:model element can contain one or more jaxrs:resource elements). For details, see Section 18.3, "Defining REST Services with the Model Schema" . jaxrs:providers Enables you to register one or more custom JAX-RS providers with this endpoint. The children of this element are instances of ( bean element) or references to ( ref element) JAX-RS providers. jaxrs:extensionMappings When the URL of a REST invocation ends in a file extension, you can use this element to associate it automatically with a particular content type. For example, the .xml file extension could be associated with the application/xml content type. For details, see the section called "Extension mappings and language mappings" . jaxrs:languageMappings When the URL of a REST invocation ends in a language suffix, you can use this element to map this to a particular language. For example, the .en language suffix could be associated with the en-GB language. For details, see the section called "Extension mappings and language mappings" . jaxrs:schemaLocations Specifies one or more XML schemas used for validating XML message content. This element can contain one or more jaxrs:schemaLocation elements, each specifying the location of an XML schema file (usually as a classpath URL). For details, see the section called "Schema validation" . 
jaxrs:resourceComparator Enables you to register a custom resource comparator, which implements the algorithm used to match an incoming URL path to a particular resource class or method. jaxrs:resourceClasses (Blueprint only) Can be used instead of the jaxrs:server/@serviceClass attribute, if you want to create multiple resources from class names. The children of jaxrs:resourceClasses must be class elements with a name attribute set to the name of the resource class. In this case, the classes are instantiated by Apache CXF, not by Blueprint or Spring. [a] The Invoker implementation controls how a service is invoked. For example, it controls whether each request is handled by a new instance of the service implementation or if state is preserved across invocations. 18.2. Configuring JAX-RS Client Endpoints 18.2.1. Defining a JAX-RS Client Endpoint Injecting client proxies The main point of instantiating a client proxy bean in an XML language (Blueprint XML or Spring XML) is in order to inject it into another bean, which can then use the client proxy to invoke the REST service. To create a client proxy bean in XML, use the jaxrs:client element. Namespaces The JAX-RS client endpoint is defined using a different XML namespace from the server endpoint. The following table shows which namespace to use for which XML language: XML Language Namespace for client endpoint Blueprint http://cxf.apache.org/blueprint/jaxrs-client Spring http://cxf.apache.org/jaxrs-client Basic client endpoint definition The following example shows how to create a client proxy bean in Blueprint XML or Spring XML: Where you must set the following attributes to define the basic client endpoint: id The bean ID of the client proxy can be used to inject the client proxy into other beans in your XML configuration. address The address attribute specifies the base URL of the REST invocations. serviceClass The serviceClass attribute provides a description of the REST service by specifying a root resource class (annotated by @Path ). In fact, this is a server class, but it is not used directly by the client. The specified class is used only for its metadata (through Java reflection and JAX-RS annotations), which is used to construct the client proxy dynamically. Specifying headers You can add HTTP headers to the client proxy's invocations using the jaxrs:headers child elements, as follows: 18.2.2. jaxrs:client Attributes Attributes Table 18.3, "JAX-RS Client Endpoint Attributes" describes the attributes available on the jaxrs:client element. Table 18.3. JAX-RS Client Endpoint Attributes Attribute Description address Specifies the HTTP address of the endpoint where the consumer will make requests. This value overrides the value set in the contract. bindingId Specifies the ID of the message binding the consumer uses. A list of valid binding IDs is provided in Chapter 23, Apache CXF Binding IDs . bus Specifies the ID of the Spring bean configuring the bus managing the endpoint. inheritHeaders Specifies whether the headers set for this proxy will be inherited, if a subresource proxy is created from this proxy. Default is false . username Specifies the username used for simple username/password authentication. password Specifies the password used for simple username/password authentication. modelRef Specifies a model schema as a classpath resource (for example, a URL of the form classpath:/path/to/model.xml ). For details of how to define a JAX-RS model schema, see Section 18.3, "Defining REST Services with the Model Schema" . 
serviceClass Specifies the name of a service interface or a resource class (that is annotated with @PATH ), re-using it from the JAX-RS server implementation. In this case, the specified class is not invoked directly (it is actually a server class). The specified class is used only for its metadata (through Java reflection and JAX-RS annotations), which is used to construct the client proxy dynamically. serviceName Specifies the service QName (using the format ns : name ) for the JAX-RS endpoint in the special case where a JMS transport is used. For details, see the section called "Using the JMS transport" . threadSafe Specifies whether or not the client proxy is thread-safe. Default is false . transportId For selecting a non-standard transport layer (in place of HTTP). In particular, you can select the JMS transport by setting this property to http://cxf.apache.org/transports/jms . For details, see the section called "Using the JMS transport" . abstract (Spring only) Specifies if the bean is an abstract bean. Abstract beans act as parents for concrete bean definitions and are not instantiated. The default is false . Setting this to true instructs the bean factory not to instantiate the bean. depends-on (Spring only) Specifies a list of beans that the endpoint depends on being instantiated before it can be instantiated. 18.2.3. jaxrs:client Child Elements Child elements Table 18.4, "JAX-RS Client Endpoint Child Elements" describes the child elements of the jaxrs:client element. Table 18.4. JAX-RS Client Endpoint Child Elements Element Description jaxrs:executor jaxrs:features Specifies a list of beans that configure advanced features of Apache CXF. You can provide either a list of bean references or a list of embedded beans. jaxrs:binding Not used. jaxrs:dataBinding Specifies the class implementing the data binding used by the endpoint. This is specified using an embedded bean definition. For more details, see the section called "Specifying the data binding" . jaxrs:inInterceptors Specifies a list of interceptors that process inbound responses. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:inFaultInterceptors Specifies a list of interceptors that process inbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:outInterceptors Specifies a list of interceptors that process outbound requests. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:outFaultInterceptors Specifies a list of interceptors that process outbound fault messages. For more information see Part VII, "Developing Apache CXF Interceptors" . jaxrs:properties Specifies a map of properties that are passed to the endpoint. jaxrs:providers Enables you to register one or more custom JAX-RS providers with this endpoint. The children of this element are instances of ( bean element) or references to ( ref element) JAX-RS providers. jaxrs:modelBeans Consists of a list of references to one or more org.apache.cxf.jaxrs.model.UserResource beans, which are the basic elements of a resource model (corresponding to jaxrs:resource elements). For details, see Section 18.3, "Defining REST Services with the Model Schema" . jaxrs:model Defines a resource model directly in this endpoint (that is, a jaxrs:model element containing one or more jaxrs:resource elements). For details, see Section 18.3, "Defining REST Services with the Model Schema" . jaxrs:headers Used for setting headers on the outgoing message. 
For details, see the section called "Specifying headers" . jaxrs:schemaLocations Specifies one or more XML schemas used for validating XML message content. This element can contain one or more jaxrs:schemaLocation elements, each specifying the location of an XML schema file (usually as a classpath URL). For details, see the section called "Schema validation" . 18.3. Defining REST Services with the Model Schema RESTful services without annotations The JAX-RS model schema makes it possible to define RESTful services without annotating Java classes. That is, instead of adding annotations like @Path , @PathParam , @Consumes , @Produces , and so on, directly to a Java class (or interface), you can provide all of the relevant REST metadata in a separate XML file, using the model schema. This can be useful, for example, in cases where you are unable to modify the Java source that implements the service. Example model schema Example 18.1, "Sample JAX-RS Model Schema" shows an example of a model schema that defines service metadata for the BookStoreNoAnnotations root resource class. Example 18.1. Sample JAX-RS Model Schema Namespaces The XML namespace that you use to define a model schema depends on whether you are defining the corresponding JAX-RS endpoint in Blueprint XML or in Spring XML. The following table shows which namespace to use for which XML language: XML Language Namespace Blueprint http://cxf.apache.org/blueprint/jaxrs Spring http://cxf.apache.org/jaxrs How to attach a model schema to an endpoint To define and attach a model schema to an endpoint, perform the following steps: Define the model schema, using the appropriate XML namespace for your chosen injection platform (Blueprint XML or Spring XML). Add the model schema file to your project's resources, so that the schema file is available on the classpath in the final package (JAR, WAR, or OSGi bundle file). Note Alternatively, it is also possible to embed a model schema directly into a JAX-RS endpoint, using the endpoint's jaxrs:model child element. Configure the endpoint to use the model schema, by setting the endpoint's modelRef attribute to the location of the model schema on the classpath (using a classpath URL). If necessary, instantiate the root resources explicitly, using the jaxrs:serviceBeans element. You can skip this step, if the model schema references root resource classes directly (instead of referencing base interfaces). Configuration of model schema referencing a class If the model schema applies directly to root resource classes, there is no need to define any root resource beans using the jaxrs:serviceBeans element, because the model schema automatically instantiates the root resource beans. For example, given that customer-resources.xml is a model schema that associates metadata with customer resource classes, you could instantiate a customerService service endpoint as follows: Configuration of model schema referencing an interface If the model schema applies to Java interfaces (which are the base interfaces of the root resources), you must instantiate the root resource classes using the jaxrs:serviceBeans element in the endpoint. For example, given that customer-interfaces.xml is a model schema that associates metadata with customer interfaces, you could instantiate a customerService service endpoint as follows: Model Schema Reference A model schema is defined using the following XML elements: model Root element of the model schema. 
If you need to reference the model schema (for example, from a JAX-RS endpoint using the modelRef attribute), you should set the id attribute on this element. model/resource The resource element is used to associate metadata with a specific root resource class (or with a corresponding interface). You can define the following attributes on the resource element: Attribute Description name The name of the resource class (or corresponding interface) to which this resource model is applied. path The component of the REST URL path that maps to this resource. consumes Specifies the content type (Internet media type) consumed by this resource-for example, application/xml or application/json . produces Specifies the content type (Internet media type) produced by this resource-for example, application/xml or application/json . model/resource/operation The operation element is used to associate metadata with Java methods. You can define the following attributes on an operation element: Attribute Description name The name of the Java method to which this element is applied. path The component of the REST URL path that maps to this method. This attribute value can include parameter references, for example: path="/books/{id}/chapter" , where {id} extracts the value of the id parameter from the path. verb Specifies the HTTP verb that maps to this method. Typically one of: GET , POST , PUT , or DELETE . If the HTTP verb is not specified, it is assumed that the Java method is a sub-resource locator , which returns a reference to a sub-resource object (where the sub-resource class must also be provided with metadata using a resource element). consumes Specifies the content type (Internet media type) consumed by this operation-for example, application/xml or application/json . produces Specifies the content type (Internet media type) produced by this operation-for example, application/xml or application/json . oneway If true , configures the operation to be oneway , meaning that no reply message is needed. Defaults to false . model/resource/operation/param The param element is used to extract a value from the REST URL and inject it into one of the method parameters. You can define the following attributes on a param element: Attribute Description name The name of the Java method parameter to which this element is applied. type Specifies how the parameter value is extracted from the REST URL or message. It can be set to one of the following values: PATH , QUERY , MATRIX , HEADER , COOKIE , FORM , CONTEXT , REQUEST_BODY . defaultValue Default value to inject into the parameter, in case a value could not be extracted from the REST URL or message. encoded If true , the parameter value is injected in its URI encoded form (that is, using %nn encoding). Default is false . For example, when extracting a parameter from the URL path, /name/Joe%20Bloggs with encoded set to true , the parameter is injected as Joe%20Bloggs ; otherwise, the parameter would be injected as Joe Bloggs .
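As a further illustration of the param attributes listed above, the following minimal model fragment declares a GET operation whose parameters are taken from the query string and from a header. The class, method, and parameter names are hypothetical, and the namespace shown is the Spring variant noted in the Namespaces table.

<model xmlns="http://cxf.apache.org/jaxrs">
  <resource name="org.example.SearchServiceNoAnnotations" path="search" produces="application/xml">
    <operation name="findBooks" verb="GET" path="/books">
      <param name="author" type="QUERY" defaultValue="unknown"/>
      <param name="language" type="HEADER"/>
    </operation>
  </resource>
</model>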
[ "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/blueprint/jaxrs\" xmlns:cxf=\"http://cxf.apache.org/blueprint/core\" xsi:schemaLocation=\" http://www.osgi.org/xmlns/blueprint/v1.0.0 https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://cxf.apache.org/blueprint/jaxrs http://cxf.apache.org/schemas/blueprint/jaxrs.xsd http://cxf.apache.org/blueprint/core http://cxf.apache.org/schemas/blueprint/core.xsd \"> <cxf:bus> <cxf:features> <cxf:logging/> </cxf:features> </cxf:bus> <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:serviceBeans> <ref component-id=\"serviceBean\" /> </jaxrs:serviceBeans> </jaxrs:server> <bean id=\"serviceBean\" class=\"service.CustomerService\"/> </blueprint>", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxrs=\"http://cxf.apache.org/jaxrs\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd\"> <jaxrs:server id=\"customerService\" address=\"/customers\"> <jaxrs:serviceBeans> <ref bean=\"serviceBean\"/> </jaxrs:serviceBeans> </jaxrs:server> <bean id=\"serviceBean\" class=\"service.CustomerService\"/> </beans>", "<jaxrs:server address=\"/customers\" basePackages=\"a.b.c\"/>", "<beans ... > <jaxrs:server id=\"customerService\" address=\"/service1\" beanNames=\"customerBean1 customerBean2\"/> <bean id=\"customerBean1\" class=\"demo.jaxrs.server.CustomerRootResource1\" scope=\"prototype\"/> <bean id=\"customerBean2\" class=\"demo.jaxrs.server.CustomerRootResource2\" scope=\"prototype\"/> </beans>", "<beans ... 
> <jaxrs:server id=\"customerService\" address=\"/service1\"> <jaxrs:serviceFactories> <ref bean=\"sfactory1\" /> <ref bean=\"sfactory2\" /> </jaxrs:serviceFactories> </jaxrs:server> <bean id=\"sfactory1\" class=\"org.apache.cxf.jaxrs.spring.SpringResourceFactory\"> <property name=\"beanId\" value=\"customerBean1\"/> </bean> <bean id=\"sfactory2\" class=\"org.apache.cxf.jaxrs.spring.SpringResourceFactory\"> <property name=\"beanId\" value=\"customerBean2\"/> </bean> <bean id=\"customerBean1\" class=\"demo.jaxrs.server.CustomerRootResource1\" scope=\"prototype\"/> <bean id=\"customerBean2\" class=\"demo.jaxrs.server.CustomerRootResource2\" scope=\"prototype\"/> </beans>", "<jaxrs:server address=\"/rest\" docLocation=\"wadl/bookStore.wadl\"> <jaxrs:serviceBeans> <bean class=\"org.bar.generated.BookStore\"/> </jaxrs:serviceBeans> </jaxrs:server>", "<jaxrs:server address=\"/rest\" docLocation=\"wadl/bookStore.wadl\"> <jaxrs:serviceBeans> <bean class=\"org.bar.generated.BookStore\"/> </jaxrs:serviceBeans> <jaxrs:schemaLocations> <jaxrs:schemaLocation>classpath:/schemas/a.xsd</jaxrs:schemaLocation> <jaxrs:schemaLocation>classpath:/schemas/b.xsd</jaxrs:schemaLocation> </jaxrs:schemaLocations> </jaxrs:server>", "<jaxrs:server address=\"/rest\" docLocation=\"wadl/bookStore.wadl\"> <jaxrs:serviceBeans> <bean class=\"org.bar.generated.BookStore\"/> </jaxrs:serviceBeans> <jaxrs:schemaLocations> <jaxrs:schemaLocation>classpath:/schemas/</jaxrs:schemaLocation> </jaxrs:schemaLocations> </jaxrs:server>", "<jaxrs:server id=\"jaxbbook\" address=\"/jaxb\"> <jaxrs:serviceBeans> <ref bean=\"serviceBean\" /> </jaxrs:serviceBeans> <jaxrs:dataBinding> <bean class=\"org.apache.cxf.jaxb.JAXBDataBinding\"/> </jaxrs:dataBinding> </jaxrs:server>>", "<jaxrs:server id=\"aegisbook\" address=\"/aegis\"> <jaxrs:serviceBeans> <ref bean=\"serviceBean\" /> </jaxrs:serviceBeans> <jaxrs:dataBinding> <bean class=\"org.apache.cxf.aegis.databinding.AegisDatabinding\"> <property name=\"aegisContext\"> <bean class=\"org.apache.cxf.aegis.AegisContext\"> <property name=\"writeXsiTypes\" value=\"true\"/> </bean> </property> </bean> </jaxrs:dataBinding> </jaxrs:server>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jms=\"http://cxf.apache.org/transports/jms\" xmlns:jaxrs=\"http://cxf.apache.org/jaxrs\" xsi:schemaLocation=\" http://cxf.apache.org/transports/jms http://cxf.apache.org/schemas/configuration/jms.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://cxf.apache.org/jaxrs http://cxf.apache.org/schemas/jaxrs.xsd\"> <bean class=\"org.springframework.beans.factory.config.PropertyPlaceholderConfigurer\"/> <bean id=\"ConnectionFactory\" class=\"org.apache.activemq.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp://localhost:USD{testutil.ports.EmbeddedJMSBrokerLauncher}\" /> </bean> <jaxrs:server xmlns:s=\"http://books.com\" serviceName=\"s:BookService\" transportId= \"http://cxf.apache.org/transports/jms\" address=\"jms:queue:test.jmstransport.text?replyToName=test.jmstransport.response\"> <jaxrs:serviceBeans> <bean class=\"org.apache.cxf.systest.jaxrs.JMSBookStore\"/> </jaxrs:serviceBeans> </jaxrs:server> </beans>", "GET /resource.xml", "<jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:serviceBeans> <bean class=\"org.apache.cxf.jaxrs.systests.CustomerService\" /> </jaxrs:serviceBeans> <jaxrs:extensionMappings> <entry 
key=\"json\" value=\"application/json\"/> <entry key=\"xml\" value=\"application/xml\"/> </jaxrs:extensionMappings> </jaxrs:server>", "GET /resource.en", "<jaxrs:server id=\"customerService\" address=\"/\"> <jaxrs:serviceBeans> <bean class=\"org.apache.cxf.jaxrs.systests.CustomerService\" /> </jaxrs:serviceBeans> <jaxrs:languageMappings> <entry key=\"en\" value=\"en-gb\"/> </jaxrs:languageMappings> </jaxrs:server>", "<jaxrs:client id=\"restClient\" address=\"http://localhost:8080/test/services/rest\" serviceClass=\"org.apache.cxf.systest.jaxrs.BookStoreJaxrsJaxws\"/>", "<jaxrs:client id=\"restClient\" address=\"http://localhost:8080/test/services/rest\" serviceClass=\"org.apache.cxf.systest.jaxrs.BookStoreJaxrsJaxws\" inheritHeaders=\"true\"> <jaxrs:headers> <entry key=\"Accept\" value=\"text/xml\"/> </jaxrs:headers> </jaxrs:client>", "<model xmlns=\"http://cxf.apache.org/jaxrs\"> <resource name=\"org.apache.cxf.systest.jaxrs.BookStoreNoAnnotations\" path=\"bookstore\" produces=\"application/json\" consumes=\"application/json\"> <operation name=\"getBook\" verb=\"GET\" path=\"/books/{id}\" produces=\"application/xml\"> <param name=\"id\" type=\"PATH\"/> </operation> <operation name=\"getBookChapter\" path=\"/books/{id}/chapter\"> <param name=\"id\" type=\"PATH\"/> </operation> <operation name=\"updateBook\" verb=\"PUT\"> <param name=\"book\" type=\"REQUEST_BODY\"/> </operation> </resource> <resource name=\"org.apache.cxf.systest.jaxrs.ChapterNoAnnotations\"> <operation name=\"getItself\" verb=\"GET\"/> <operation name=\"updateChapter\" verb=\"PUT\" consumes=\"application/xml\"> <param name=\"content\" type=\"REQUEST_BODY\"/> </operation> </resource> </model>", "<jaxrs:server id=\"customerService\" address=\"/customers\" modelRef=\"classpath:/org/example/schemas/customer-resources.xml\" />", "<jaxrs:server id=\"customerService\" address=\"/customers\" modelRef=\"classpath:/org/example/schemas/customer-interfaces.xml\"> <jaxrs:serviceBeans> <ref component-id=\"serviceBean\" /> </jaxrs:serviceBeans> </jaxrs:server> <bean id=\"serviceBean\" class=\"service.CustomerService\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxrsendpointconfig
Updating OpenShift Data Foundation
Updating OpenShift Data Foundation
Red Hat OpenShift Data Foundation 4.13
Instructions for cluster and storage administrators regarding upgrading
Red Hat Storage Documentation Team
Abstract
This document explains how to update versions of Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/updating_openshift_data_foundation/index
Chapter 2. Role Management
Chapter 2. Role Management 2.1. Role Management OpenStack uses a role-based access control (RBAC) mechanism to manage access to its resources. Roles define which actions users can perform. By default, there are two predefined roles: a member role that gets attached to a project, and an administrative role to enable non-admin users to administer the environment. Note that there are abstract levels of permission, and it is possible to create the roles the administrator needs, and configure services adequately. 2.1.1. View Roles Use the following command to list the available predefined roles. To get details for a specified role, run: Example 2.1.2. Create and Assign a Role As a cloud administrator, you can create and manage roles on the Keystone client using the following set of commands. Each OpenStack deployment must include at least one project, one user, and one role, linked together. However, users can be members of multiple projects. To assign users to multiple projects, create a role and assign that role to a user-project pair. Note that you can create a user and assign a primary project and default role in the dashboard. Note Either the name or ID can be used to specify users, roles, or projects. Create the new-role role: Example To assign a user to a project, you must assign the role to a user-project pair. To do this, obtain the user, role, and project names or IDs: List users: List roles: List projects: Assign a role to a user-project pair. Example In this example, you assign the admin role to the admin user in the demo project: Verify the role assignment for the user admin : Example 2.2. Implied Roles and Domain-specific Roles 2.2.1. Implied roles In OpenStack, access control is enforced by confirming that a user is assigned to a specific role. Until recently, those roles had to be explicitly assigned to either a user, or to a group in which the user was a member. Identity Service (keystone) has now added the concept of implied role assignments: If a user is explicitly assigned to a role, then the user could be implicitly assigned to additional roles as well. 2.2.2. Inference Rules Implied assignment is managed by role inference rules. An inference rule is written in the form superior implies subordinate . For example, a rule might state that the admin role implies the _member_ role. As a result, a user assigned to admin for a project would implicitly be assigned to the _member_ role as well. With implied roles , a user's role assignments are processed cumulatively, allowing the user to inherit the subordinate roles. This result is dependent on an inference rule being created that specifies this outcome. 2.2.2.1. Keystone Configuration For keystone to observe implied roles, the infer_roles setting must be enabled in /etc/keystone/keystone.conf : Implied roles are governed by a defined set of inference rules. These rules determine how a role assignment can result in the implied membership of another role. See Section 2.2.3.1, "Demonstration of Implied Roles" for an example. 2.2.3. Prevent Certain Roles From Being Implied You can prevent certain roles from being implied onto a user. For example, in /etc/keystone/keystone.conf , you can add a ListOpt of roles: This will prevent a user from ever being assigned a role implicitly. Instead, the user will need to be explicitly granted access to that role. 2.2.3.1. Demonstration of Implied Roles This section describes how to create an inference rule, resulting in an implied role. 
These rules control how one role can imply membership of another. The example rule used in the following procedure will imply that members of the admin role also have _member_ access: 2.2.3.1.1. Assign a Role to the User Retrieve the ID of a user that will have the _member_ role implied. For example: Retrieve the ID of the demo project: Retrieve the ID of the admin role: Give the User1 user admin privileges to the demo project: Confirm the admin role assignment: 2.2.3.1.2. Create the Inference Rule Now that you have granted the admin role to User1 , run the following steps to create the inference rule: First, confirm User1's current role membership: Retrieve the list of role IDs: Create the inference rule. These are currently created using curl . This example uses the IDs of the roles returned in the step. It also runs the command using the admin_token in keystone.conf : Review the results using the CLI. In this example, User1 has received implied access to the _member_ role, as indicated by ID 9fe2ff9ee4384b1894a90878d3e92bab : Review your inference rules using curl : 2.2.4. Domain-Specific Roles Domain-specific roles grant you more granular control when defining rules for roles, allowing the roles to act as aliases for the existing prior roles. Note that you cannot have a global role implying a domain-specific role. As a result, if you list the effective role assignments of a user in a project, the domain-specific roles will not be present. Domain-specific roles can be created by a user who administers their keystone domain; they do not have to be administrators of the OpenStack deployment. This means that a domain-specific role definition can be limited to a specific domain. Note Domain-specific roles cannot be used to scope a token. This can only be done with global roles. 2.2.4.1. Using Domain-Specific Roles This example describes how to create a domain specific role and review its effect. Create a domain: Create a role that specifies a domain (note that this parameter is distinct from --domain ):
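To tie the create-and-assign workflow in this chapter together, the following sketch strings the documented openstack commands into one sequence; the role, user, and project names (auditor, jdoe, demo) are hypothetical placeholders, and the exact commands with sample output for each step are shown in the listings below.

# Create a custom role, assign it to a user-project pair, then verify the assignment.
openstack role create auditor
openstack role add --project demo --user jdoe auditor
openstack role assignment list --user jdoe --project demo

# With implied roles enabled ([token] infer_roles = true in keystone.conf),
# adding --effective also lists any roles the assignment implies.
openstack role assignment list --user jdoe --project demo --effective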
[ "openstack role list +----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 4fd37c2c993a4acab8e1b5896afb8687 | SwiftOperator | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | a0f19c1381c54770ae068456c4411d82 | ResellerAdmin | | ae49e2b796ea4820ac51637be27650d8 | admin | +----------------------------------+---------------+", "openstack role show admin", "openstack role show admin +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | ae49e2b796ea4820ac51637be27650d8 | | name | admin | +-----------+----------------------------------+", "openstack role create [ROLE_NAME]", "openstack role create new-role +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 880c116b6a55464b99ca8d8d8fe26743 | | name | new-role | +-----------+----------------------------------+", "openstack user list", "openstack role list", "openstack project list", "openstack role add --project [PROJECT_NAME] --user [USER_ID] [ROLE_ID]", "openstack role add --project demo --user 895e43465b9643b9aa29df0073572bb2 ae49e2b796ea4820ac51637be27650d8", "openstack role assignment list --user [USER_ID] --project [PROJECT_ID]", "openstack role assignment list --user 895e43465b9643b9aa29df0073572bb2 --project demo +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | ae49e2b796ea4820ac51637be27650d8 | 895e43465b9643b9aa29df0073572bb2 | | 7efbdc8b4ab448b8b5aeb9fa5898ce23 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "[token] infer_roles = true", "[assignment] prohibited_implied_role = admin", "openstack user show User1 +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | domain_id | default | | enabled | True | | id | ce803dd127c9489199c89ce3b68d39b4 | | name | User1 | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+", "openstack project show demo +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | default tenant | | domain_id | default | | enabled | True | | id | 2717ebc905e449b5975449c370edac69 | | is_domain | False | | name | demo | | parent_id | default | +-------------+----------------------------------+", "openstack role show admin +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 9b821b2920544be7a4d8f71fa99fcd35 | | name | admin | +-----------+----------------------------------+", "openstack role add --user User1 --project demo admin", "openstack role assignment list --user User1 --project demo --effective +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | 
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 9b821b2920544be7a4d8f71fa99fcd35 | ce803dd127c9489199c89ce3b68d39b4 | | 2717ebc905e449b5975449c370edac69 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role assignment list --user User1 --project demo --effective +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 9b821b2920544be7a4d8f71fa99fcd35 | ce803dd127c9489199c89ce3b68d39b4 | | 2717ebc905e449b5975449c370edac69 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role list +----------------------------------+---------------+ | ID | Name | +----------------------------------+---------------+ | 9b821b2920544be7a4d8f71fa99fcd35 | admin | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | | ea199fe4293745719c2afd3402ed7b95 | ResellerAdmin | | fe8eba5dfd1e4f4a854ad20a150d995e | SwiftOperator | +----------------------------------+---------------+", "source overcloudrc export OS_TOKEN=`grep ^admin_token /etc/keystone/keystone.conf | awk -F'=' '{print USD2}'` curl -X PUT -H \"X-Auth-Token: USDOS_TOKEN\" -H \"Content-type: application/json\" USDOS_AUTH_URL/roles/9b821b2920544be7a4d8f71fa99fcd35/implies/9fe2ff9ee4384b1894a90878d3e92bab", "source overcloudrc openstack role assignment list --user User1 --project demo --effective +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 9b821b2920544be7a4d8f71fa99fcd35 | ce803dd127c9489199c89ce3b68d39b4 | | 2717ebc905e449b5975449c370edac69 | | False | | 9fe2ff9ee4384b1894a90878d3e92bab | ce803dd127c9489199c89ce3b68d39b4 | | 2717ebc905e449b5975449c370edac69 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "source overcloudrc export OS_TOKEN=`grep ^admin_token /etc/keystone/keystone.conf | awk -F'=' '{print USD2}'` curl -s -H \"X-Auth-Token: USDOS_TOKEN\" USDOS_AUTH_URL/role_inferences | python -mjson.tool { \"role_inferences\": [ { \"implies\": [ { \"id\": \"9fe2ff9ee4384b1894a90878d3e92bab\", \"links\": { \"self\": \"https://osp.lab.local:5000/v3/roles/9fe2ff9ee4384b1894a90878d3e92bab\" }, \"name\": \"_member_\" } ], \"prior_role\": { \"id\": \"9b821b2920544be7a4d8f71fa99fcd35\", \"links\": { \"self\": \"https://osp.lab.local:5000/v3/roles/9b821b2920544be7a4d8f71fa99fcd35\" }, \"name\": \"admin\" } } ] }", "openstack domain create corp01", "openstack role create operators --role-domain domain-corp01" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/users_and_identity_management_guide/role_management
18.7. Managing ACIs using the command line
18.7. Managing ACIs using the command line This section describes how to manage ACIs using the command line. Note Managing Directory Server ACIs is not supported in the web console. 18.7.1. Displaying ACIs Use the ldapsearch utility to display ACIs using the command line. For example, to display the ACIs set on dc=example,dc=com and sub-entries: 18.7.2. Adding an ACI Use the ldapmodify utility to add an ACI. For example: 18.7.3. Deleting an ACI To delete an ACI using the command line: Display the ACIs set on the entry. See Section 18.7.1, "Displaying ACIs" . Delete the ACI: If only one aci attribute is set on the entry or you want to remove all ACIs from the entry: If multiple ACIs exist on the entry and you want to delete a specific ACI, specify the exact ACI: For further details about deleting attributes, see Section 3.1.4.3, "Deleting Attributes from an Entry" . 18.7.4. Updating an ACI To update an ACI using the command line: Delete the existing ACI. See Section 18.7.3, "Deleting an ACI" . Add a new ACI with the updated settings. See Section 18.7.2, "Adding an ACI" .
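As a convenience, the two-step update described in Section 18.7.4 can also be expressed as a single ldapmodify operation that deletes the old aci value and adds the new one in one change record. The sketch below follows the pattern of the examples listed after this section; the replacement ACL text is illustrative only.

# Update an ACI in one operation: delete the old value, then add the new one.
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: ou=People,dc=example,dc=com
changetype: modify
delete: aci
aci: (targetattr="userPassword") (version 3.0; acl "Allow users updating their password"; allow (write) userdn= "ldap:///self";)
-
add: aci
aci: (targetattr="userPassword") (version 3.0; acl "Allow users updating their own password"; allow (write) userdn= "ldap:///self";)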
[ "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -b \"dc=example,dc=com\" -s sub '(aci=*)' aci", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr=\"userPassword\") (version 3.0; acl \"Allow users updating their password\"; allow (write) userdn= \"ldap:///self\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=People,dc=example,dc=com changetype: delete delete: aci", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=People,dc=example,dc=com changetype: modify delete: aci aci: (targetattr=\"userPassword\") (version 3.0; acl \"Allow users updating their password\"; allow (write) userdn= \"ldap:///self\";)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/managing_acis
7.163. openchange
7.163. openchange 7.163.1. RHSA-2013:0515 - Moderate: openchange security, bug fix and enhancement update Updated openchange packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The openchange packages provide libraries to access Microsoft Exchange servers using native protocols. Evolution-MAPI uses these libraries to integrate the Evolution PIM application with Microsoft Exchange servers. Note The openchange packages have been upgraded to upstream version 1.0, which provides a number of bug fixes and enhancements over the previous version, including support for the rebased samba4 packages and several API changes. (BZ# 767672 , BZ# 767678 ) Security Fix CVE-2012-1182 A flaw was found in the Samba suite's Perl-based DCE/RPC IDL (PIDL) compiler. As OpenChange uses code generated by PIDL, this could have resulted in buffer overflows in the way OpenChange handles RPC calls. With this update, the code has been generated with an updated version of PIDL to correct this issue. Bug Fixes BZ# 680061 When the user tried to modify a meeting with one required attendee and himself as the organizer, a segmentation fault occurred in the memcpy() function. Consequently, the evolution-data-server application terminated unexpectedly with a segmentation fault. This bug has been fixed and evolution-data-server no longer crashes in the described scenario. BZ# 870405 Prior to this update, OpenChange 1.0 was unable to send messages with a large message body or with large attachments. This was caused by minor issues in OpenChange's exchange.idl definitions. This bug has been fixed and OpenChange now sends large messages without complications. All users of openchange are advised to upgrade to these updated packages, which fix these issues and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/openchange
Chapter 1. Ceph RESTful API
Chapter 1. Ceph RESTful API As a storage administrator, you can use the Ceph RESTful API, or simply the Ceph API, provided by the Red Hat Ceph Storage Dashboard to interact with the Red Hat Ceph Storage cluster. You can display information about the Ceph Monitors and OSDs, along with their respective configuration options. You can even create or edit Ceph pools. The Ceph API uses the following standards: HTTP 1.1 JSON MIME and HTTP Content Negotiation JWT These standards are OpenAPI 3.0 compliant, regulating the API syntax, semantics, content encoding, versioning, authentication, and authorization. 1.1. Prerequisites A healthy running Red Hat Ceph Storage cluster. Access to the node running the Ceph Manager. 1.2. Versioning for the Ceph API A main goal for the Ceph RESTful API is to provide a stable interface. To achieve a stable interface, the Ceph API is built on the following principles: A mandatory explicit default version for all endpoints to avoid implicit defaults. Fine-grained change control per endpoint. The expected version from a specific endpoint is stated in the HTTP header. Syntax Example If the current Ceph API server is not able to address that specific version, a 415 - Unsupported Media Type response will be returned. Using semantic versioning. Major changes are backwards incompatible. Changes might result in non-additive changes to the request, and to the response formats for a specific endpoint. Minor changes are backwards and forwards compatible. Changes consist of additive changes to the request or response formats for a specific endpoint. 1.3. Authentication and authorization for the Ceph API Access to the Ceph RESTful API goes through two checkpoints. The first is authenticating that the request is made on behalf of a valid, existing user. The second is authorizing that the previously authenticated user can perform a specific action, such as creating, reading, updating, or deleting, on the target endpoint. Before users start using the Ceph API, they need a valid JSON Web Token (JWT). The /api/auth endpoint allows you to retrieve this token. Example This token must be used together with every API request by placing it within the Authorization HTTP header. Syntax Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 1.4. Enabling and Securing the Ceph API module The Red Hat Ceph Storage Dashboard module offers RESTful API access to the storage cluster over an SSL-secured connection. Important If SSL is disabled, user names and passwords are sent unencrypted to the Red Hat Ceph Storage Dashboard. Prerequisites Root-level access to a Ceph Monitor node. Ensure that you have at least one ceph-mgr daemon active. If you use a firewall, ensure that TCP port 8443 , for SSL, and TCP port 8080 , without SSL, are open on the node with the active ceph-mgr daemon. Procedure Log into the Cephadm shell: Example Enable the RESTful plug-in: Configure an SSL certificate. If your organization's certificate authority (CA) provides a certificate, then set it using the certificate files: Syntax Example If you want to set unique node-based certificates, then add a HOST_NAME to the commands: Example Alternatively, you can generate a self-signed certificate. However, using a self-signed certificate does not provide the full security benefits of the HTTPS protocol: Warning Most modern web browsers will complain about self-signed certificates, which require you to confirm before establishing a secure connection.
Create a user, set the password, and set the role: Syntax Example This example creates a user named user1 with the administrator role. Connect to the RESTful plug-in web page. Open a web browser and enter the following URL: Syntax Example If you used a self-signed certificate, confirm a security exception. Additional Resources The ceph dashboard --help command. The https:// HOST_NAME :8443/doc page, where HOST_NAME is the IP address or name of the node with the running ceph-mgr instance. The Red Hat Enterprise Linux 8 Security Hardening guide. 1.5. Questions and Answers 1.5.1. Getting Information This section describes how to use the Ceph API to view information about the storage cluster, Ceph Monitors, OSDs, pools, and hosts: Section 1.5.1.1, "How Can I View All Cluster Configuration Options?" Section 1.5.1.2, "How Can I View a Particular Cluster Configuration Option?" Section 1.5.1.3, "How Can I View All Configuration Options for OSDs?" Section 1.5.1.4, "How Can I View CRUSH Rules?" Section 1.5.1.5, "How Can I View Information about Monitors?" Section 1.5.1.6, "How Can I View Information About a Particular Monitor?" Section 1.5.1.7, "How Can I View Information about OSDs?" Section 1.5.1.8, "How Can I View Information about a Particular OSD?" Section 1.5.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" Section 1.5.1.10, "How Can I View Information About Pools?" Section 1.5.1.11, "How Can I View Information About a Particular Pool?" Section 1.5.1.12, "How Can I View Information About Hosts?" Section 1.5.1.13, "How Can I View Information About a Particular Host?" 1.5.1.1. How Can I View All Cluster Configuration Options? This section describes how to use the RESTful plug-in to view cluster configuration options and their values. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance CEPH_MANAGER_PORT with the TCP port number. The default TCP port number is 8443. Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 5 1.5.1.2. How Can I View a Particular Cluster Configuration Option? This section describes how to view a particular cluster option and its value. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 5 1.5.1.3. How Can I View All Configuration Options for OSDs? This section describes how to view all configuration options and their values for OSDs. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 5 1.5.1.4. How Can I View CRUSH Rules? This section describes how to view CRUSH rules. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The CRUSH Rules section in the Administration Guide for Red Hat Ceph Storage 5. 1.5.1.5. How Can I View Information about Monitors? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.5.1.6. 
How Can I View Information About a Particular Monitor? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user name and password when prompted. 1.5.1.7. How Can I View Information about OSDs? This section describes how to view information about OSDs, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.5.1.8. How Can I View Information about a Particular OSD? This section describes how to view information about a particular OSD, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.5.1.9. How Can I Determine What Processes Can Be Scheduled on an OSD? This section describes how to use the RESTful plug-in to view what processes, such as scrubbing or deep scrubbing, can be scheduled on an OSD. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.5.1.10. How Can I View Information About Pools? This section describes how to view information about pools, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.5.1.11. How Can I View Information About a Particular Pool? This section describes how to view information about a particular pool, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user name and password when prompted. 1.5.1.12. How Can I View Information About Hosts? This section describes how to view information about hosts, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.5.1.13. 
How Can I View Information About a Particular Host? This section describes how to view information about a particular host, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user name and password when prompted. 1.5.2. Changing Configuration This section describes how to use the Ceph API to change OSD configuration options, the state of an OSD, and information about pools: Section 1.5.2.1, "How Can I Change OSD Configuration Options?" Section 1.5.2.2, "How Can I Change the OSD State?" Section 1.5.2.3, "How Can I Reweight an OSD?" Section 1.5.2.4, "How Can I Change Information for a Pool?" 1.5.2.1. How Can I Change OSD Configuration Options? This section describes how to use the RESTful plug-in to change OSD configuration options. The curl Command On the command line, use: Replace: OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.2.2. How Can I Change the OSD State? This section describes how to use the RESTful plug-in to change the state of an OSD. The curl Command On the command line, use: Replace: STATE with the state to change ( in or up ) VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field STATE with the state to change ( in or up ) VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.2.3. How Can I Reweight an OSD? This section describes how to change the weight of an OSD. 
The curl Command On the command line, use: Replace: VALUE with the new weight USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field VALUE with the new weight USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.2.4. How Can I Change Information for a Pool? This section describes how to use the RESTful plug-in to change information for a particular pool. The curl Command On the command line, use: Replace: OPTION with the option to modify VALUE with the new value of the option USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field OPTION with the option to modify VALUE with the new value of the option USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.3. Administering the Cluster This section describes how to use the Ceph API to initialize scrubbing or deep scrubbing on an OSD, create a pool or remove data from a pool, remove requests, or create a request: Section 1.5.3.1, "How Can I Run a Scheduled Process on an OSD?" Section 1.5.3.2, "How Can I Create a New Pool?" Section 1.5.3.3, "How Can I Remove Pools?" 1.5.3.1. How Can I Run a Scheduled Process on an OSD? This section describes how to use the RESTful API to run scheduled processes, such as scrubbing or deep scrubbing, on an OSD. The curl Command On the command line, use: Replace: COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.5.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.5.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.3.2. How Can I Create a New Pool? This section describes how to use the RESTful plug-in to create a new pool.
The curl Command On the command line, use: Replace: NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.5.3.3. How Can I Remove Pools? This section describes how to use the RESTful plug-in to remove a pool. This request is forbidden by default. To allow it, add the following parameter to the Ceph configuration file. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.6. Additional Resources See Appendix A, The Ceph RESTful API specifications for specific details on the API. See the Testing the API Python script on GitHub.
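Pulling the versioning and authentication pieces of this chapter together, the sketch below shows one way a token-based session might look from the command line. The host name and credentials are placeholders, and it assumes the /api/auth response carries the token in a token field; verify the field name against your release before relying on it. The per-task curl and Python invocations are in the listings below.

# Request a JWT from the /api/auth endpoint (placeholders: host01, user1, p@ssw0rd).
TOKEN=$(curl --silent --insecure -X POST "https://host01:8443/api/auth" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Content-Type: application/json" \
  -d '{"username": "user1", "password": "p@ssw0rd"}' | jq -r '.token')

# Reuse the token on later requests, pinning the expected API version in the Accept header.
curl --silent --insecure "https://host01:8443/api/host" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Authorization: Bearer $TOKEN"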
[ "Accept: application/vnd.ceph.api.v MAJOR . MINOR +json", "Accept: application/vnd.ceph.api.v1.0+json", "curl -X POST \"https://example.com:8443/api/auth\" -H \"Accept: application/vnd.ceph.api.v1.0+json\" -H \"Content-Type: application/json\" -d '{\"username\": \"user1\", \"password\": \"password1\"}'", "curl -H \"Authorization: Bearer TOKEN \"", "root@host01 ~]# cephadm shell", "ceph mgr module enable dashboard", "ceph dashboard set-ssl-certificate HOST_NAME -i CERT_FILE ceph dashboard set-ssl-certificate-key HOST_NAME -i KEY_FILE", "ceph dashboard set-ssl-certificate -i dashboard.crt ceph dashboard set-ssl-certificate-key -i dashboard.key", "ceph dashboard set-ssl-certificate host01 -i dashboard.crt ceph dashboard set-ssl-certificate-key host01 -i dashboard.key", "ceph dashboard create-self-signed-cert", "echo -n \" PASSWORD \" > PATH_TO_FILE / PASSWORD_FILE ceph dashboard ac-user-create USER_NAME -i PASSWORD_FILE ROLE", "echo -n \"p@ssw0rd\" > /root/dash-password.txt ceph dashboard ac-user-create user1 -i /root/dash-password.txt administrator", "https:// HOST_NAME :8443", "https://host01:8443", "curl --silent --user USER 'https:// CEPH_MANAGER : CEPH_MANAGER_PORT /api/cluster_conf'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/cluster_conf", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/flags", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/crush_rule", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor'", "python >> import requests >> result = 
requests.get('https:// CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/monitor", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/monitor/ NAME", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/ ID", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/osd/ ID /command", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/pool", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/pool/ ID", "curl --silent --user USER 'https:// 
CEPH_MANAGER :8080/api/host'", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host'", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/host", "curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '", "curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "https:// CEPH_MANAGER :8080/api/host/ HOST_NAME", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = 
requests.patch('https:// CEPH_MANAGER :8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()", "mon_allow_pool_delete = true", "curl --request DELETE --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "curl --request DELETE --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '", "python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()", "python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/ceph-restful-api
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.26/making-open-source-more-inclusive
A.9. x86_energy_perf_policy
A.9. x86_energy_perf_policy The x86_energy_perf_policy tool allows administrators to define the relative importance of performance and energy efficiency. It is provided by the kernel-tools package. To view the current policy, run the following command: To set a new policy, run the following command: Replace profile_name with one of the following profiles. performance The processor does not sacrifice performance for the sake of saving energy. This is the default value. normal The processor tolerates minor performance compromises for potentially significant energy savings. This is a reasonable saving for most servers and desktops. powersave The processor accepts potentially significant performance decreases in order to maximize energy efficiency. For further details of how to use x86_energy_perf_policy , see the man page:
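For example, a typical sequence (run as root; the profile name here is only an illustration) would set a profile and then read the current policy back:
x86_energy_perf_policy powersave
x86_energy_perf_policy -r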
[ "x86_energy_perf_policy -r", "x86_energy_perf_policy profile_name", "man x86_energy_perf_policy" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-x86_energy_perf_policy
Chapter 4. Disabling monitoring for user-defined projects
Chapter 4. Disabling monitoring for user-defined projects As a dedicated-admin , you can disable monitoring for user-defined projects. You can also exclude individual projects from user workload monitoring. 4.1. Disabling monitoring for user-defined projects By default, monitoring for user-defined projects is enabled. If you do not want to use the built-in monitoring stack to monitor user-defined projects, you can disable it. Prerequisites You logged in to OpenShift Cluster Manager . Procedure From the OpenShift Cluster Manager Hybrid Cloud Console, select a cluster. Click the Settings tab. Click the Enable user workload monitoring check box to unselect the option, and then click Save . User workload monitoring is disabled. The Prometheus, Prometheus Operator, and Thanos Ruler components are stopped in the openshift-user-workload-monitoring project. 4.2. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label.
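As an optional check after disabling user workload monitoring, you can confirm that the monitoring pods are being removed from the openshift-user-workload-monitoring project:
oc get pods -n openshift-user-workload-monitoring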
[ "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/sd-disabling-monitoring-for-user-defined-projects
Appendix B. Using Red Hat Maven repositories
Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat
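After either configuration, a quick way to confirm that Maven can resolve artifacts from the Red Hat repository is to run a resolution goal in your project (assuming the settings.xml or POF configuration shown above is in place):
mvn dependency:resolve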
[ "/home/ <username> /.m2/settings.xml", "C:\\Users\\<username>\\.m2\\settings.xml", "<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>", "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>", "<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/using_red_hat_maven_repositories
Chapter 2. An active/passive Apache HTTP Server in a Red Hat High Availability Cluster
Chapter 2. An active/passive Apache HTTP Server in a Red Hat High Availability Cluster This chapter describes how to configure an active/passive Apache HTTP server in a two-node Red Hat Enterprise Linux High Availability Add-On cluster using pcs to configure cluster resources. In this use case, clients access the Apache HTTP server through a floating IP address. The web server runs on one of two nodes in the cluster. If the node on which the web server is running becomes inoperative, the web server starts up again on the second node of the cluster with minimal service interruption. Figure 2.1, "Apache in a Red Hat High Availability Two-Node Cluster" shows a high-level overview of the cluster. The cluster is a two-node Red Hat High Availability cluster which is configured with a network power switch and with shared storage. The cluster nodes are connected to a public network, for client access to the Apache HTTP server through a virtual IP. The Apache server runs on either Node 1 or Node 2, each of which has access to the storage on which the Apache data is kept. Figure 2.1. Apache in a Red Hat High Availability Two-Node Cluster This use case requires that your system include the following components: A two-node Red Hat High Availability cluster with power fencing configured for each node. This procedure uses the cluster example provided in Chapter 1, Creating a Red Hat High-Availability Cluster with Pacemaker . A public virtual IP address, required for Apache. Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or other shared network block device. The cluster is configured with an Apache resource group, which contains the cluster components that the web server requires: an LVM resource, a file system resource, an IP address resource, and a web server resource. This resource group can fail over from one node of the cluster to the other, allowing either node to run the web server. Before creating the resource group for this cluster, you will perform the following procedures: Configure an ext4 file system mounted on the logical volume my_lv , as described in Section 2.1, "Configuring an LVM Volume with an ext4 File System" . Configure a web server, as described in Section 2.2, "Web Server Configuration" . Ensure that only the cluster is capable of activating the volume group that contains my_lv , and that the volume group will not be activated outside of the cluster on startup, as described in Section 2.3, "Exclusive Activation of a Volume Group in a Cluster" . After performing these procedures, you create the resource group and the resources it contains, as described in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command" . 2.1. Configuring an LVM Volume with an ext4 File System This use case requires that you create an LVM logical volume on storage that is shared between the nodes of the cluster. The following procedure creates an LVM logical volume and then creates an ext4 file system on that volume. In this example, the shared partition /dev/sdb1 is used to store the LVM physical volume from which the LVM logical volume will be created. Note LVM volumes and the corresponding partitions and devices used by cluster nodes must be connected to the cluster nodes only. Since the /dev/sdb1 partition is storage that is shared, you perform this procedure on one node only. Create an LVM physical volume on partition /dev/sdb1 . Create the volume group my_vg that consists of the physical volume /dev/sdb1 .
Create a logical volume using the volume group my_vg . You can use the lvs command to display the logical volume. Create an ext4 file system on the logical volume my_lv .
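As an optional sanity check on that same node, before the cluster manages the file system, you can briefly mount and unmount the new volume (assuming the /mnt mount point exists):
mount /dev/my_vg/my_lv /mnt
umount /mnt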
[ "pvcreate /dev/sdb1 Physical volume \"/dev/sdb1\" successfully created", "vgcreate my_vg /dev/sdb1 Volume group \"my_vg\" successfully created", "lvcreate -L450 -n my_lv my_vg Rounding up size to full physical extent 452.00 MiB Logical volume \"my_lv\" created", "lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my_lv my_vg -wi-a---- 452.00m", "mkfs.ext4 /dev/my_vg/my_lv mke2fs 1.42.7 (21-Jan-2013) Filesystem label= OS type: Linux" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-service-haaa
Chapter 14. Handling a node failure
Chapter 14. Handling a node failure As a storage administrator, you can experience a whole node failing within the storage cluster, and handling a node failure is similar to handling a disk failure. With a node failure, instead of Ceph recovering placement groups (PGs) for only one disk, all PGs on the disks within that node must be recovered. Ceph will detect that the OSDs are all down and automatically start the recovery process, known as self-healing. There are three node failure scenarios. Here is the high-level workflow for each scenario when replacing a node: Replacing the node, but using the root and Ceph OSD disks from the failed node. Disable backfilling. Replace the node, taking the disks from the old node, and adding them to the new node. Enable backfilling. Replacing the node, reinstalling the operating system, and using the Ceph OSD disks from the failed node. Disable backfilling. Create a backup of the Ceph configuration. Replace the node and add the Ceph OSD disks from the failed node. Configure disks as JBOD. Install the operating system. Restore the Ceph configuration. Add the new node to the storage cluster using the Ceph Orchestrator commands and Ceph daemons are placed automatically on the respective node. Enable backfilling. Replacing the node, reinstalling the operating system, and using all new Ceph OSD disks. Disable backfilling. Remove all OSDs on the failed node from the storage cluster. Create a backup of the Ceph configuration. Replace the node and add the Ceph OSD disks from the failed node. Configure disks as JBOD. Install the operating system. Add the new node to the storage cluster using the Ceph Orchestrator commands and Ceph daemons are placed automatically on the respective node. Enable backfilling. 14.1. Prerequisites A running Red Hat Ceph Storage cluster. A failed node. 14.2. Considerations before adding or removing a node One of the outstanding features of Ceph is the ability to add or remove Ceph OSD nodes at run time. This means that you can resize the storage cluster capacity or replace hardware without taking down the storage cluster. The ability to serve Ceph clients while the storage cluster is in a degraded state also has operational benefits. For example, you can add or remove or replace hardware during regular business hours, rather than working overtime or on weekends. However, adding and removing Ceph OSD nodes can have a significant impact on performance. Before you add or remove Ceph OSD nodes, consider the effects on storage cluster performance: Whether you are expanding or reducing the storage cluster capacity, adding or removing Ceph OSD nodes induces backfilling as the storage cluster rebalances. During that rebalancing time period, Ceph uses additional resources, which can impact storage cluster performance. In a production Ceph storage cluster, a Ceph OSD node has a particular hardware configuration that facilitates a particular type of storage strategy. Since a Ceph OSD node is part of a CRUSH hierarchy, the performance impact of adding or removing a node typically affects the performance of pools that use the CRUSH ruleset. Additional Resources See the Red Hat Ceph Storage Storage Strategies Guide for more details. 14.3. Performance considerations The following factors typically affect a storage cluster's performance when adding or removing Ceph OSD nodes: Ceph clients place load on the I/O interface to Ceph; that is, the clients place load on a pool. A pool maps to a CRUSH ruleset.
The underlying CRUSH hierarchy allows Ceph to place data across failure domains. If the underlying Ceph OSD node involves a pool that is experiencing high client load, the client load could significantly affect recovery time and reduce performance. Because write operations require data replication for durability, write-intensive client loads in particular can increase the time for the storage cluster to recover. Generally, the capacity you are adding or removing affects the storage cluster's time to recover. In addition, the storage density of the node you add or remove might also affect recovery times. For example, a node with 36 OSDs typically takes longer to recover than a node with 12 OSDs. When removing nodes, you MUST ensure that you have sufficient spare capacity so that you will not reach full ratio or near full ratio . If the storage cluster reaches full ratio , Ceph will suspend write operations to prevent data loss. A Ceph OSD node maps to at least one Ceph CRUSH hierarchy, and the hierarchy maps to at least one pool. Each pool that uses a CRUSH ruleset experiences a performance impact when Ceph OSD nodes are added or removed. Replication pools tend to use more network bandwidth to replicate deep copies of the data, whereas erasure coded pools tend to use more CPU to calculate k+m coding chunks. The more copies that exist of the data, the longer it takes for the storage cluster to recover. For example, a larger pool or one that has a greater number of k+m chunks will take longer to recover than a replication pool with fewer copies of the same data. Drives, controllers and network interface cards all have throughput characteristics that might impact the recovery time. Generally, nodes with higher throughput characteristics, such as 10 Gbps and SSDs, recover more quickly than nodes with lower throughput characteristics, such as 1 Gbps and SATA drives. 14.4. Recommendations for adding or removing nodes Red Hat recommends adding or removing one OSD at a time within a node and allowing the storage cluster to recover before proceeding to the OSD. This helps to minimize the impact on storage cluster performance. Note that if a node fails, you might need to change the entire node at once, rather than one OSD at a time. To remove an OSD: Using Removing the OSD daemons using the Ceph Orchestrator . To add an OSD: Using Deploying Ceph OSDs on all available devices . Using Deploying Ceph OSDs using advanced service specification . Using Deploying Ceph OSDs on specific devices and hosts . When adding or removing Ceph OSD nodes, consider that other ongoing processes also affect storage cluster performance. To reduce the impact on client I/O, Red Hat recommends the following: Calculate capacity Before removing a Ceph OSD node, ensure that the storage cluster can backfill the contents of all its OSDs without reaching the full ratio . Reaching the full ratio will cause the storage cluster to refuse write operations. Temporarily disable scrubbing Scrubbing is essential to ensuring the durability of the storage cluster's data; however, it is resource intensive. Before adding or removing a Ceph OSD node, disable scrubbing and deep-scrubbing and let the current scrubbing operations complete before proceeding. Once you have added or removed a Ceph OSD node and the storage cluster has returned to an active+clean state, unset the noscrub and nodeep-scrub settings. Limit backfill and recovery If you have reasonable data durability, there is nothing wrong with operating in a degraded state. 
For example, you can operate the storage cluster with osd_pool_default_size = 3 and osd_pool_default_min_size = 2 . You can tune the storage cluster for the fastest possible recovery time, but doing so significantly affects Ceph client I/O performance. To maintain the highest Ceph client I/O performance, limit the backfill and recovery operations and allow them to take longer. You can also consider setting the sleep and delay parameters such as, osd_recovery_sleep . Increase the number of placement groups Finally, if you are expanding the size of the storage cluster, you may need to increase the number of placement groups. If you determine that you need to expand the number of placement groups, Red Hat recommends making incremental increases in the number of placement groups. Increasing the number of placement groups by a significant amount will cause a considerable degradation in performance. 14.5. Adding a Ceph OSD node To expand the capacity of the Red Hat Ceph Storage cluster, you can add an OSD node. Prerequisites A running Red Hat Ceph Storage cluster. A provisioned node with a network connection. Procedure Verify that other nodes in the storage cluster can reach the new node by its short host name. Temporarily disable scrubbing: Example Limit the backfill and recovery features: Syntax Example Extract the cluster's public SSH keys to a folder: Syntax Example Copy ceph cluster's public SSH keys to the root user's authorized_keys file on the new host: Syntax Example Add the new node to the CRUSH map: Syntax Example Add an OSD for each disk on the node to the storage cluster. Using Deploying Ceph OSDs on all available devices . Using Deploying Ceph OSDs using advanced service specification . Using Deploying Ceph OSDs on specific devices and hosts . Important When adding an OSD node to a Red Hat Ceph Storage cluster, Red Hat recommends adding one OSD daemon at a time and allowing the cluster to recover to an active+clean state before proceeding to the OSD. Additional Resources See the Setting a Specific Configuration Setting at Runtime section in the Red Hat Ceph Storage Configuration Guide for more details. See Adding a Bucket and Moving a Bucket sections in the Red Hat Ceph Storage Storage Strategies Guide for details on placing the node at an appropriate location in the CRUSH hierarchy,. 14.6. Removing a Ceph OSD node To reduce the capacity of a storage cluster, remove an OSD node. Warning Before removing a Ceph OSD node, ensure that the storage cluster can backfill the contents of all OSDs without reaching the full ratio . Reaching the full ratio will cause the storage cluster to refuse write operations. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure Check the storage cluster's capacity: Syntax Temporarily disable scrubbing: Syntax Limit the backfill and recovery features: Syntax Example Remove each OSD on the node from the storage cluster: Using Removing the OSD daemons using the Ceph Orchestrator . Important When removing an OSD node from the storage cluster, Red Hat recommends removing one OSD at a time within the node and allowing the cluster to recover to an active+clean state before proceeding to remove the OSD. After you remove an OSD, check to verify that the storage cluster is not getting to the near-full ratio : Syntax Repeat this step until all OSDs on the node are removed from the storage cluster. Once all OSDs are removed, remove the host: Using Removing hosts using the Ceph Orchestrator . 
Additional Resources See the Setting a specific configuration at runtime section in the Red Hat Ceph Storage Configuration Guide for more details. 14.7. Simulating a node failure To simulate a hard node failure, power off the node and reinstall the operating system. Prerequisites A healthy running Red Hat Ceph Storage cluster. Root-level access to all nodes on the storage cluster. Procedure Check the storage cluster's capacity to understand the impact of removing the node: Example Optionally, disable recovery and backfilling: Example Shut down the node. If you are changing the host name, remove the node from CRUSH map: Example Check the status of the storage cluster: Example Reinstall the operating system on the node. Add the new node: Using the Adding hosts using the Ceph Orchestrator . Optionally, enable recovery and backfilling: Example Check Ceph's health: Example Additional Resources See the Red Hat Ceph Storage Installation Guide for more details.
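While backfilling runs after any of these procedures, you can watch the recovery progress with the standard status commands, for example:
ceph -s
ceph health detail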
[ "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd unset noscrub ceph osd unset nodeep-scrub", "osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1", "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]", "ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1", "ceph cephadm get-pub-key > ~/ PATH", "ceph cephadm get-pub-key > ~/ceph.pub", "ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2", "ssh-copy-id -f -i ~/ceph.pub root@host02", "ceph orch host add NODE_NAME IP_ADDRESS", "ceph orch host add host02 10.10.128.70", "ceph df rados df ceph osd df", "ceph osd set noscrub ceph osd set nodeep-scrub", "ceph tell DAEMON_TYPE .* injectargs -- OPTION_NAME VALUE [-- OPTION_NAME VALUE ]", "ceph tell osd.* injectargs --osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1", "ceph -s ceph df", "ceph df rados df ceph osd df", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph osd crush rm host03", "ceph -s", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/handling-a-node-failure
Migrating to Data Grid 8
Migrating to Data Grid 8 Red Hat Data Grid 8.4 Migrate deployments and applications to Data Grid 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/index
4.5.3. Network Configuration
4.5.3. Network Configuration Clicking on the Network tab displays the Network Configuration page, which provides an interface for configuring the network transport type. You can use this tab to select one of the following options: UDP Multicast and Let Cluster Choose the Multicast Address This is the default setting. With this option selected, the Red Hat High Availability Add-On software creates a multicast address based on the cluster ID. It generates the lower 16 bits of the address and appends them to the upper portion of the address according to whether the IP protocol is IPv4 or IPv6: For IPv4 - The address formed is 239.192. plus the lower 16 bits generated by Red Hat High Availability Add-On software. For IPv6 - The address formed is FF15:: plus the lower 16 bits generated by Red Hat High Availability Add-On software. Note The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node. UDP Multicast and Specify the Multicast Address Manually If you need to use a specific multicast address, select this option and enter a multicast address into the Multicast Address text box. If you do specify a multicast address, you should use the 239.192.x.x series (or FF15:: for IPv6) that cman uses. Otherwise, using a multicast address outside that range may cause unpredictable results. For example, using 224.0.0.x (which is "All hosts on the network") may not be routed correctly, or even routed at all by some hardware. If you specify or modify a multicast address, you must restart the cluster for this to take effect. For information on starting and stopping a cluster with Conga , see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters" . Note If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance. UDP Unicast (UDPU) As of the Red Hat Enterprise Linux 6.2 release, the nodes in a cluster can communicate with each other using the UDP Unicast transport mechanism. It is recommended, however, that you use IP multicasting for the cluster network. UDP Unicast is an alternative that can be used when IP multicasting is not available. For GFS2 deployments, using UDP Unicast is not recommended. Click Apply . When changing the transport type, a cluster restart is necessary for the changes to take effect.
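For example, to see the cluster ID that the generated multicast address is based on, you could run the following on any cluster node (the grep pattern assumes the usual cman_tool status output):
cman_tool status | grep "Cluster Id"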
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-network-conga-CA
Chapter 4. Understanding update channels and releases
Chapter 4. Understanding update channels and releases Update channels are the mechanism by which users declare the OpenShift Container Platform minor version they intend to update their clusters to. They also allow users to choose the timing and level of support their updates will have through the fast , stable , candidate , and eus channel options. The Cluster Version Operator uses an update graph based on the channel declaration, along with other conditional information, to provide a list of recommended and conditional updates available to the cluster. Update channels correspond to a minor version of OpenShift Container Platform. The version number in the channel represents the target minor version that the cluster will eventually be updated to, even if it is higher than the cluster's current minor version. For instance, OpenShift Container Platform 4.10 update channels provide the following recommendations: Updates within 4.10. Updates within 4.9. Updates from 4.9 to 4.10, allowing all 4.9 clusters to eventually update to 4.10, even if they do not immediately meet the minimum z-stream version requirements. eus-4.10 only: updates within 4.8. eus-4.10 only: updates from 4.8 to 4.9 to 4.10, allowing all 4.8 clusters to eventually update to 4.10. 4.10 update channels do not recommend updates to 4.11 or later releases. This strategy ensures that administrators must explicitly decide to update to the next minor version of OpenShift Container Platform. Update channels control only release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.10 offers the following update channels: stable-4.10 eus-4.y (only offered for EUS versions and meant to facilitate updates between EUS versions) fast-4.10 candidate-4.10 If you do not want the Cluster Version Operator to fetch available updates from the update recommendation service, you can use the oc adm upgrade channel command in the OpenShift CLI to configure an empty channel. This configuration can be helpful if, for example, a cluster has restricted network access and there is no local, reachable update recommendation service. Warning Red Hat recommends updating to versions suggested by OpenShift Update Service only. For a minor version update, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 4.1. Update channels 4.1.1. fast-4.10 channel The fast-4.10 channel is updated with new versions of OpenShift Container Platform 4.10 as soon as Red Hat declares the version as a general availability (GA) release. As such, these releases are fully supported and intended for use in production environments. 4.1.2. stable-4.10 channel While the fast-4.10 channel contains releases as soon as their errata are published, releases are added to the stable-4.10 channel after a delay. During this delay, data is collected from multiple sources and analyzed for indications of product regressions. Once a significant number of data points have been collected, and absent negative signals, these releases are added to the stable channel. Note Since the time required to obtain a significant number of data points varies based on many factors, a Service Level Objective (SLO) is not offered for the delay duration between fast and stable channels.
For more information, please see "Choosing the correct channel for your cluster" Newly installed clusters default to using stable channels. 4.1.3. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer Extended Update Support (EUS). Releases promoted to the stable channel are also simultaneously promoted to the EUS channels. The primary purpose of the EUS channels is to serve as a convenience for clusters performing an EUS-to-EUS update. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. 4.1.4. candidate-4.10 channel The candidate-4.10 channel offers unsupported early access to releases as soon as they are built. Releases present only in candidate channels may not contain the full feature set of eventual GA releases or features may be removed prior to GA. Additionally, these releases have not been subject to full Red Hat Quality Assurance and may not offer update paths to later GA releases. Given these caveats, the candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 4.1.5. Update recommendations in the channel OpenShift Container Platform maintains an update recommendation service that knows your installed OpenShift Container Platform version and the path to take within the channel to get you to the release. Update paths are also limited to versions relevant to your currently selected channel and its promotion characteristics. You can imagine seeing the following releases in your channel: 4.10.0 4.10.1 4.10.3 4.10.4 The service recommends only updates that have been tested and have no known serious regressions. For example, if your cluster is on 4.10.1 and OpenShift Container Platform suggests 4.10.4, then it is recommended to update from 4.10.1 to 4.10.4. Important Do not rely on consecutive patch numbers. In this example, 4.10.2 is not and never was available in the channel, therefore updates to 4.10.2 are not recommended or supported. 4.1.6. Update recommendations and Conditional Updates Red Hat monitors newly released versions and update paths associated with those versions before and after they are added to supported channels. If Red Hat removes update recommendations from any supported release, a superseding update recommendation will be provided to a future version that corrects the regression. There may however be a delay while the defect is corrected, tested, and promoted to your selected channel. Beginning in OpenShift Container Platform 4.10, when update risks are confirmed, they are declared as Conditional Update risks for the relevant updates. Each known risk may apply to all clusters or only clusters matching certain conditions. Some examples include having the Platform set to None or the CNI provider set to OpenShiftSDN . The Cluster Version Operator (CVO) continually evaluates known risks against the current cluster state. If no risks match, the update is recommended. If the risk matches, those updates are supported but not recommended, and a reference link is provided. The reference link helps the cluster admin decide if they would like to accept the risk and update anyway. When Red Hat chooses to declare Conditional Update risks, that action is taken in all relevant channels simultaneously. 
Declaration of a Conditional Update risk may happen either before or after the update has been promoted to supported channels. 4.1.7. Choosing the correct channel for your cluster Choosing the appropriate channel involves two decisions. First, select the minor version you want for your cluster update. Selecting a channel which matches your current version ensures that you only apply z-stream updates and do not receive feature updates. Selecting an available channel which has a version greater than your current version will ensure that after one or more updates your cluster will have updated to that version. Your cluster will only be offered channels which match its current version, the next version, or the next EUS version. Note Due to the complexity involved in planning updates between versions many minors apart, channels that assist in planning updates beyond a single EUS-to-EUS update are not offered. Second, you should choose your desired rollout strategy. You may choose to update as soon as Red Hat declares a release GA by selecting from fast channels or you may want to wait for Red Hat to promote releases to the stable channel. Update recommendations offered in the fast-4.10 and stable-4.10 are both fully supported and benefit equally from ongoing data analysis. The promotion delay before promoting a release to the stable channel represents the only difference between the two channels. Updates to the latest z-streams are generally promoted to the stable channel within a week or two, however the delay when initially rolling out updates to the latest minor is much longer, generally 45-90 days. Please consider the promotion delay when choosing your desired channel, as waiting for promotion to the stable channel may affect your scheduling plans. Additionally, there are several factors which may lead an organization to move clusters to the fast channel either permanently or temporarily including: The desire to apply a specific fix known to affect your environment without delay. Application of CVE fixes without delay. CVE fixes may introduce regressions, so promotion delays still apply to z-streams with CVE fixes. Internal testing processes. If it takes your organization several weeks to qualify releases, it is best to test concurrently with our promotion process rather than waiting. This also assures that any telemetry signal provided to Red Hat is factored into our rollout, so issues relevant to you can be fixed faster. 4.1.8. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings. 4.1.9. Switching between channels A channel can be switched from the web console or through the oc adm upgrade channel command: USD oc adm upgrade channel <channel> The web console will display an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.10 channel to the fast-4.10 channel.
You can switch to the candidate-4.10 channel at any time, but some releases for this channel might be unsupported. You can switch from the candidate-4.10 channel to the fast-4.10 channel if your current release is a general availability release. You can always switch from the fast-4.10 channel to the stable-4.10 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.10 if the current release was recently promoted. Additional resources Updating along a conditional upgrade path Choosing the correct channel for your cluster
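For example, to move a cluster to the EUS channel and then review the updates recommended for it (the channel name is illustrative):
oc adm upgrade channel eus-4.10
oc adm upgrade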
[ "oc adm upgrade channel <channel>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/understanding-upgrade-channels-releases
D.4. Searching an Internationalized Directory
D.4. Searching an Internationalized Directory When performing search operations, the Directory Server can sort the results based on any language for which the server has a supporting collation order. For a listing of the collation orders supported by the directory, see Section D.2, "Supported Locales" . Note An LDAPv3 search is required to perform internationalized searches. Therefore, do not set the LDAPv2 option on the call for ldapsearch . This section focuses using matching rule filters to return international attribute values. For more information on general ldapsearch syntax, see Section 14.3, "LDAP Search Filters" . Section D.4.1, "Matching Rule Formats" Section D.4.2, "Supported Search Types" Section D.4.3, "International Search Examples" D.4.1. Matching Rule Formats The matching rule filters for internationalized searches can be represented in any several ways, and which one should be used is a matter of preference: As the OID of the collation order for the locale on which to base the search. As the language tag associated with the collation order on which to base the search. As the OID of the collation order and a suffix that represents a relational operator. As the language tag associated with the collation order and a suffix that represents a relational operator. The syntax for each of these options is discussed in the following sections: Section D.4.1.1, "Using an OID for the Matching Rule" Section D.4.1.2, "Using a Language Tag for the Matching Rule" Section D.4.1.3, "Using an OID and Suffix for the Matching Rule" Section D.4.1.4, "Using a Language Tag and Suffix for the Matching Rule" D.4.1.1. Using an OID for the Matching Rule Each locale supported by the Directory Server has an associated collation order OID. For a list of OIDs supported by the Directory Server, see the /etc/dirsrv/config/slapd-collations.conf file. The collation order OID can be used in the matching rule portion of the matching rule filter as follows: The relational operator is included in the value portion of the string, separated from the value by a single space. For example, to search for all departmentNumber attributes that are at or after N4709 in the Swedish collation order, use the following filter: D.4.1.2. Using a Language Tag for the Matching Rule Each locale supported by the Directory Server has an associated language tag. For a list of language tags supported by the Directory Server, see the /etc/dirsrv/config/slapd-collations.conf file. The language tag can be used in the matching rule portion of the matching rule filter as follows: The relational operator is included in the value portion of the string, separated from the value by a single space. For example, to search the directory for all description attributes with a value of estudiante using the Spanish collation order, use the following filter: D.4.1.3. Using an OID and Suffix for the Matching Rule As an alternative to using a relational operator-value pair, append a suffix that represents a specific operator to the OID in the matching rule portion of the filter. Combine the OID and suffix as follows: Note This syntax is only supported by the mozldap utility and not by OpenLDAP utilities, such as ldapsearch . For example, to search for businessCategory attributes with the value softwareprodukte in the German collation order, use the following filter: The .3 in the example is the equality suffix. For a list of OIDs supported by the Directory Server, see the /etc/dirsrv/config/slapd-collations.conf file. 
For a list of relational operators and their equivalent suffixes, see Table D.2, "Search Types, Operators, and Suffixes" . D.4.1.4. Using a Language Tag and Suffix for the Matching Rule As an alternative to using a relational operator-value pair, append a suffix that represents a specific operator to the language tag in the matching rule portion of the filter. Combine the language tag and suffix as follows: Note This syntax is only supported by the mozldap utility and not by OpenLDAP utilities, such as ldapsearch . For example, to search for all surnames that come at or after La Salle in the French collation order, use the following filter: For a list of language tags supported by the Directory Server, see the /etc/dirsrv/config/slapd-collations.conf file. For a list of relational operators and their equivalent suffixes, see Table D.2, "Search Types, Operators, and Suffixes" . D.4.2. Supported Search Types The Directory Server supports the following types of international searches: equality (=) substring (*) greater-than (>) greater-than or equal-to (>=) less-than (<) less-than or equal-to (<=) Approximate, or phonetic, and presence searches are supported only in English. As with a regular ldapsearch search operation, an international search uses operators to define the type of search. However, when invoking an international search, either use the standard operators (=, >=, >, <, <=) in the value portion of the search string, or use a special type of operator, called a suffix (not to be confused with the directory suffix), in the matching rule portion of the filter. Table D.2, "Search Types, Operators, and Suffixes" summarizes each type of search, the operator, and the equivalent suffix. Table D.2. Search Types, Operators, and Suffixes Search Type Operator Suffix Less-than < .1 Less-than or equal-to <= .2 Equality = .3 Greater-than or equal-to >= .4 Greater-than > .5 Substring * .6 D.4.3. International Search Examples The following sections show examples of how to perform international searches on directory data. Each example gives all the possible matching rule filter formats so that you can become familiar with the formats and select the one that works best. D.4.3.1. Less-Than Example Performing a locale-specific search using the less-than operator (<), or suffix ( .1 ) searches for all attribute values that come before the given attribute in a specific collation order. For example, to search for all surnames that come before the surname Marquez in the Spanish collation order, any of the following matching rule filters would work: D.4.3.2. Less-Than or Equal-to Example Performing a locale-specific search using the less-than or equal-to operator (<=), or suffix ( .2 ) searches for all attribute values that come at or before the given attribute in a specific collation order. For example, to search for all room numbers that come at or before room number CZ422 in the Hungarian collation order, any of the following matching rule filters would work: D.4.3.3. Equality Example Performing a locale-specific search using the equal to operator (=), or suffix ( .3 ) searches for all attribute values that match the given attribute in a specific collation order. For example, to search for all businessCategory attributes with the value softwareprodukte in the German collation order, any of the following matching rule filters would work: D.4.3.4. 
Greater-Than or Equal-to Example Performing a locale-specific search using the greater-than or equal-to operator ( >= ), or suffix ( .4 ) searches for all attribute values that come at or after the given attribute in a specific collation order. For example, to search for all localities that come at or after Quebec in the French collation order, any of the following matching rule filters would work: D.4.3.5. Greater-Than Example Performing a locale-specific search using the greater-than operator (>), or suffix ( .5 ) searches for all attribute values that come after the given attribute in a specific collation order. For example, to search for all mail hosts that come after host schranka4 in the Czech collation order, any of the following matching rule filters would work: D.4.3.6. Substring Example Performing an international substring search searches for all values that match the given pattern in the specified collation order. For example, to search for all user IDs that end in ming in the Chinese collation order, any of the following matching rule filters would work: Substring search filters that use DN-valued attributes, such as modifiersName or memberOf , do not always match entries correctly if the filter contains one or more space characters. To work around this problem, use the entire DN in the filter instead of a substring, or ensure that the DN substring in the filter begins at an RDN boundary; that is, make sure it starts with the type = part of the DN. For example, this filter should not be used: But either one of these will work correctly:
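A complete command line for one of these searches might look like the following (the bind DN and base DN are placeholders); it uses the OID with a relational operator, which also works with the OpenLDAP ldapsearch utility:
ldapsearch -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(sn:2.16.840.1.113730.3.3.2.15.1:=< Marquez)"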
[ "attr:OID :=( relational_operator value )", "departmentNumber:2.16.840.1.113730.3.3.2.46.1:=>= N4709", "attr:language-tag :=( relational_operator value )", "cn:es:== estudiante", "attr: OID+suffix := value", "businessCategory:2.16.840.1.113730.3.3.2.7.1.3:=softwareprodukte", "attr: language-tag+suffix := value", "sn:fr.4:=La Salle", "sn:2.16.840.1.113730.3.3.2.15.1:=< Marquez sn:es:=< Marquez sn:2.16.840.1.113730.3.3.2.15.1.1:=Marquez sn:es.1:=Marquez", "roomNumber:2.16.840.1.113730.3.3.2.23.1:=<= CZ422 roomNumber:hu:=<= CZ422 roomNumber:2.16.840.1.113730.3.3.2.23.1.2:=CZ422 roomNumber:hu.2:=CZ422", "businessCategory:2.16.840.1.113730.3.3.2.7.1:==softwareprodukte businessCategory:de:== softwareprodukte businessCategory:2.16.840.1.113730.3.3.2.7.1.3:=softwareprodukte businessCategory:de.3:=softwareprodukte", "locality:2.16.840.1.113730.3.3.2.18.1:=>= Quebec locality:fr:=>= Quebec locality:2.16.840.1.113730.3.3.2.18.1.4:=Quebec locality:fr.4:=Quebec", "mailHost:2.16.840.1.113730.3.3.2.5.1:=> schranka4 mailHost:cs:=> schranka4 mailHost:2.16.840.1.113730.3.3.2.5.1.5:=schranka4 mailHost:cs.5:=schranka4", "uid:2.16.840.1.113730.3.3.2.49.1:=* *ming uid:zh:=* *ming uid:2.16.840.1.113730.3.3.2.49.1.6:=* *ming .. uid:zh.6:=* *ming", "(memberOf=*Domain Administrators*)", "(memberOf=cn=Domain Administrators*) (memberOf=cn=Domain Administrators,ou=Groups,dc=example,dc=com)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Searching_an_Internationalized_Directory
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/making-open-source-more-inclusive
Chapter 12. RHEL for Real Time scheduler
Chapter 12. RHEL for Real Time scheduler RHEL for Real Time provides command line utilities that help you configure and monitor process scheduling. 12.1. chrt utility for setting the scheduler The chrt utility checks and adjusts scheduler policies and priorities. It can start new processes with the desired properties, or change the current properties of a running process. The chrt utility takes either the --pid or the -p option to specify the process ID (PID). The chrt utility takes the following policy options: -f or --fifo : sets the schedule to SCHED_FIFO . -o or --other : sets the schedule to SCHED_OTHER . -r or --rr : sets the schedule to SCHED_RR . -d or --deadline : sets the schedule to SCHED_DEADLINE . The following example shows the attributes for a specified process. 12.2. Preemptive scheduling Real-time preemption is the mechanism for temporarily interrupting an executing task, with the intention of resuming it at a later time. It occurs when a higher priority process interrupts the CPU usage. Preemption can have a particularly negative impact on performance, and constant preemption can lead to a state known as thrashing. This problem occurs when processes are constantly preempted and no process ever gets to run completely. Changing the priority of a task can help reduce involuntary preemption. You can check for voluntary and involuntary preemption occurring on a single process by viewing the contents of the /proc/PID/status file, where PID is the process identifier. The following example shows the preemption status of a process with PID 1000. 12.3. Library functions for scheduler priority Real-time processes use a different set of library calls to control policy and priority. The functions require the inclusion of the sched.h header file. The symbols SCHED_OTHER , SCHED_RR and SCHED_FIFO must also be defined in the sched.h header file. The table lists the functions that set the policy and priority for the real-time scheduler. Table 12.1. Library functions for real-time scheduler Functions Description sched_getscheduler() Retrieves the scheduler policy for a specific process identifier (PID) sched_setscheduler() Sets the scheduler policy and other parameters. This function requires three parameters: sched_setscheduler(pid_t pid , int policy , const struct sched_param *sp); sched_getparam() Retrieves the scheduling parameters of a scheduling policy. sched_setparam() Sets the parameters associated with a scheduling policy that has been already set and can be verified using the sched_getparam() function. sched_get_priority_max() Returns the maximum valid priority associated with the scheduling policy. sched_get_priority_min() Returns the minimum valid priority associated with the scheduling policy. sched_rr_get_interval() Displays the allocated timeslice for each process.
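For example, to switch a running process to SCHED_FIFO at priority 80, or to start a new program under SCHED_RR at priority 50 (the PID and program path are placeholders):
chrt -f -p 80 468
chrt -r 50 /usr/local/bin/my_rt_app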
[ "chrt -p 468 pid 468's current scheduling policy: SCHED_FIFO pid 468's current scheduling priority: 85", "grep voluntary /proc/1000/status voluntary_ctxt_switches: 194529 nonvoluntary_ctxt_switches: 195338" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_rhel-for-real-time-scheduler_understanding-rhel-for-real-time-core-concepts
4.7. Set the Transaction Wrapping Mode
4.7. Set the Transaction Wrapping Mode You can set the transaction mode as a property when you establish the connection using: the autoCommitTxn property in the connection URL (see Section 1.9, "Connection Properties for the Driver and Data Source Classes" ), the setAutoCommitTxn method (see Section 1.9, "Connection Properties for the Driver and Data Source Classes" ), or on a per-query basis, using the SET statement with the PROP_TXN_AUTO_WRAP property (see Section 3.5, "Execution Properties" ).
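For example, a connection URL that sets the wrapping mode might look like the following (the VDB name, host, and port are placeholders; DETECT is one of the accepted values, along with ON and OFF):
jdbc:teiid:MyVDB@mm://localhost:31000;autoCommitTxn=DETECT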
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/set_the_transaction_wrapping_mode1
9.9. Securing Server Connections
9.9. Securing Server Connections After designing the authentication scheme for identified users and the access control scheme for protecting information in the directory, the next step is to design a way to protect the integrity of the information as it passes between servers and client applications. For both server to client connections and server to server connections, the Directory Server supports a variety of secure connection types: Transport Layer Security (TLS) . To provide secure communications over the network, the Directory Server can use LDAP over the Transport Layer Security (TLS). TLS can be used in conjunction with encryption algorithms from RSA. The encryption method selected for a particular connection is the result of a negotiation between the client application and Directory Server. Start TLS . Directory Server also supports Start TLS, a method of initiating a Transport Layer Security (TLS) connection over a regular, unencrypted LDAP port. Simple Authentication and Security Layer (SASL) . SASL is a security framework, meaning that it sets up a system that allows different mechanisms to authenticate a user to the server, depending on what mechanism is enabled in both client and server applications. It can also establish an encrypted session between the client and a server. In Directory Server, SASL is used with GSS-API to enable Kerberos logins and can be used for almost all server to server connections, including replication, chaining, and pass-through authentication. (SASL cannot be used with Windows Sync.) Secure connections are recommended for any operations which handle sensitive information, like replication, and are required for some operations, like Windows password synchronization. Directory Server can support TLS connections, SASL, and non-secure connections simultaneously. Both SASL authentication and TLS connections can be configured at the same time. For example, the Directory Server instance can be configured to require TLS connections to the server and also support SASL authentication for replication connections. This means it is not necessary to choose whether to use TLS or SASL in a network environment; you can use both. It is also possible to set a minimum level of security for connections to the server. The security strength factor measures, in key strength, how strong a secure connection is. An ACI can be set that requires certain operations (like password changes) only occur if the connection is of a certain strength or higher. It is also possible to set a minimum SSF, which can essentially disable standard connections and require TLS, Start TLS, or SASL for every connection. The Directory Server supports TLS and SASL simultaneously, and the server calculates the SSF of all available connection types and selects the strongest. For more information about using TLS, Start TLS, and SASL, see the Administration Guide .
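For example, to enforce a minimum SSF of 128 for every connection, you could set the nsslapd-minssf configuration attribute with dsconf (the server URL is a placeholder, and 128 is only an illustrative value):
dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-minssf=128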
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory-securing_connections_with_tls_and_start_tls
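As a rough sketch of what a secured client connection looks like in practice (the host name, bind DN, and suffix below are placeholders, not values from this guide), an OpenLDAP command-line client can require Start TLS on the standard LDAP port with the -ZZ option:
ldapsearch -H ldap://ds.example.com -ZZ -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(objectClass=person)"
The -ZZ flag makes the operation fail rather than fall back to an unencrypted session if the Start TLS negotiation does not succeed; the server-side TLS and SASL configuration is covered in the Administration Guide.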
Chapter 8. Important links
Chapter 8. Important links Red Hat AMQ Broker 7.8 Release Notes Red Hat AMQ Broker 7.7 Release Notes Red Hat AMQ Broker 7.6 Release Notes Red Hat AMQ Broker 7.1 to 7.5 Release Notes (aggregated) Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2022-07-07 11:41:10 UTC
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/links
function::ansi_cursor_restore
function::ansi_cursor_restore Name function::ansi_cursor_restore - Restores a previously saved cursor position. Synopsis Arguments None General Syntax ansi_cursor_restore Description Sends the ANSI escape code that restores the cursor position previously saved with ansi_cursor_save.
[ "function ansi_cursor_restore()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ansi-cursor-restore
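A minimal usage sketch, assuming the companion ansi_cursor_save and ansi_cursor_move functions from the same ansi tapset: the probe saves the cursor, prints a status message at the top of the terminal, and then restores the cursor.
probe timer.s(5) {
  ansi_cursor_save()          # remember where the cursor currently is
  ansi_cursor_move(1, 1)      # jump to the top-left corner
  printf("still probing...")
  ansi_cursor_restore()       # put the cursor back where it was
}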
5.2. Resizing an Online Multipath Device
5.2. Resizing an Online Multipath Device If you need to resize an online multipath device, use the following procedure. Resize your physical device. Execute the following command to find the paths to the LUN: Resize your paths. For SCSI devices, writing a 1 to the rescan file for the device causes the SCSI driver to rescan, as in the following command: Ensure that you run this command for each of the path devices. For example, if your path devices are sda , sdb , sde , and sdf , you would run the following commands: Resize your multipath device by executing the multipathd resize command: Resize the file system (assuming no LVM or DOS partitions are used):
[ "multipath -l", "echo 1 > /sys/block/ path_device /device/rescan", "echo 1 > /sys/block/sda/device/rescan echo 1 > /sys/block/sdb/device/rescan echo 1 > /sys/block/sde/device/rescan echo 1 > /sys/block/sdf/device/rescan", "multipathd resize map multipath_device", "resize2fs /dev/mapper/mpatha" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/online_device_resize
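Putting the steps together, a hedged end-to-end sketch for a multipath map named mpatha whose paths are sda and sdb might look like the following; the device and map names are placeholders, and the final multipath -ll call is an extra verification step rather than part of the procedure above.
multipath -l                               # identify the paths behind the LUN
echo 1 > /sys/block/sda/device/rescan      # rescan each path device
echo 1 > /sys/block/sdb/device/rescan
multipathd resize map mpatha               # propagate the new size to the multipath map
resize2fs /dev/mapper/mpatha               # grow the file system
multipath -ll mpatha                       # confirm that the new size is reported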
Chapter 154. YAML DSL
Chapter 154. YAML DSL Since Camel 3.9 The YAML DSL provides the capability to define your Camel routes, route templates, and REST DSL configuration in YAML. 154.1. Defining a route A route is a collection of elements defined as follows: - from: 1 uri: "direct:start" steps: 2 - filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" - to: uri: "log:original" Where, 1 Route entry point, by default from and rest are supported. 2 Processing steps Note Each step represents a YAML map that has a single entry where the field name is the EIP name. As a general rule, each step provides all the parameters the related definition declares, but there are some minor differences/enhancements: Output Aware Steps Some steps, such as filter and split, have their own pipeline when an exchange matches the filter expression or for the items generated by the split expression. You can define these pipelines in the steps field: filter: expression: simple: "USD{in.header.continue} == true" steps: - to: uri: "log:filtered" Expression Aware Steps Some EIPs, such as filter and split, support the definition of an expression through the expression field: Explicit Expression field filter: expression: simple: "USD{in.header.continue} == true" To make the DSL less verbose, you can omit the expression field. Implicit Expression field filter: simple: "USD{in.header.continue} == true" In general, expressions can be defined inline, such as in the examples above, but if you need to provide more information, you can 'unroll' the expression definition and configure any single parameter the expression defines. Full Expression definition filter: tokenize: token: "<" end-token: ">" Data Format Aware Steps The marshal and unmarshal EIPs support the definition of data formats: marshal: json: library: Gson Note If you want to use the data format's default settings, you need to place an empty block as the data format parameters, like json: {} 154.2. Defining endpoints To define an endpoint with the YAML DSL you have two options: Using a classic Camel URI: - from: uri: "timer:tick?period=1s" steps: - to: uri: "telegram:bots?authorizationToken=XXX" Using URI and parameters: - from: uri: "timer://tick" parameters: period: "1s" steps: - to: uri: "telegram:bots" parameters: authorizationToken: "XXX" 154.3. Defining beans In addition to the general support for creating beans provided by Camel Main, the YAML DSL provides a convenient syntax to define and configure them: - beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar Where, 1 The name under which the bean instance is bound to the Camel Registry. 2 The fully qualified class name of the bean. 3 The properties of the bean to be set. The properties of the bean can be defined using either a map or properties style, as shown in the example below: - beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p' Note The beans element can only be used as a root element. 154.4. Configuring Options Camel components are configured on two separate levels: component level endpoint level 154.4.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level.
For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. You can configure components using the Component DSL, a configuration file (application.properties, *.yaml files, and so on), or directly in the Java code. 154.4.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders. Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They make the code more flexible and reusable. The following two sections list all the options, first for the component and then for the endpoint. 154.5. Configuring Options on languages Some languages have additional configurations that you may need to use. For example, the JSONPath language can be configured to ignore JSON parsing errors. This is useful when you use a Content Based Router and want to route the message to different endpoints. The JSON payload of the message can be in different forms, meaning that the JSONPath expressions would fail with an exception in some cases and not in others. In this situation, you must set suppress-exceptions to true, as shown below: - from: uri: "direct:start" steps: - choice: when: - jsonpath: expression: "person.middlename" suppress-exceptions: true steps: - to: "mock:middle" - jsonpath: expression: "person.lastname" suppress-exceptions: true steps: - to: "mock:last" otherwise: steps: - to: "mock:other" In the route above, the following message would have failed the JSONPath expression person.middlename because the JSON payload does not have a middlename field. To remedy this, we have suppressed the exception. { "person": { "firstname": "John", "lastname": "Doe" } } 154.6. External examples You can find a set of examples using main-yaml in Camel examples that demonstrate how to create Camel routes with YAML. You can also refer to Camel Kamelets where each Kamelet is defined using YAML.
[ "- from: 1 uri: \"direct:start\" steps: 2 - filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\" - to: uri: \"log:original\"", "filter: expression: simple: \"USD{in.header.continue} == true\" steps: - to: uri: \"log:filtered\"", "filter: expression: simple: \"USD{in.header.continue} == true\"", "filter: simple: \"USD{in.header.continue} == true\"", "filter: tokenize: token: \"<\" end-token: \">\"", "marshal: json: library: Gson", "- from: uri: \"timer:tick?period=1s\" steps: - to: uri: \"telegram:bots?authorizationToken=XXX\"", "- from: uri: \"timer://tick\" parameters: period: \"1s\" steps: - to: uri: \"telegram:bots\" parameters: authorizationToken: \"XXX\"", "- beans: - name: beanFromMap 1 type: com.acme.MyBean 2 properties: 3 foo: bar", "- beans: # map style - name: beanFromMap type: com.acme.MyBean properties: field1: 'f1' field2: 'f2' nested: field1: 'nf1' field2: 'nf2' # properties style - name: beanFromProps type: com.acme.MyBean properties: field1: 'f1_p' field2: 'f2_p' nested.field1: 'nf1_p' nested.field2: 'nf2_p'", "- from: uri: \"direct:start\" steps: - choice: when: - jsonpath: expression: \"person.middlename\" suppress-exceptions: true steps: - to: \"mock:middle\" - jsonpath: expression: \"person.lastname\" suppress-exceptions: true steps: - to: \"mock:last\" otherwise: steps: - to: \"mock:other\"", "{ \"person\": { \"firstname\": \"John\", \"lastname\": \"Doe\" } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-yaml-dsl-component-starter
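To make the component-level versus endpoint-level distinction in Section 154.4 concrete, here is a hedged sketch using the Kafka component. The property names follow the usual camel.component.<name>.<option> convention on Spring Boot, but the exact options depend on the component, so treat the names and values below as illustrative rather than authoritative.
# application.properties - component level, shared by every Kafka endpoint
camel.component.kafka.brokers = broker1.example.com:9092
camel.component.kafka.security-protocol = SASL_SSL
# route.yaml - endpoint level, options specific to this consumer
- from:
    uri: "kafka:orders"
    parameters:
      maxPollRecords: "100"
    steps:
      - to:
          uri: "log:orders"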
Chapter 5. Controlling access to smart cards by using polkit
Chapter 5. Controlling access to smart cards by using polkit To cover possible threats that cannot be prevented by mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, and for more fine-grained control, RHEL uses the polkit framework for controlling access to smart cards. System administrators can configure polkit to fit specific scenarios, such as smart-card access for non-privileged or non-local users or services. 5.1. Smart-card access control through polkit The Personal Computer/Smart Card (PC/SC) protocol specifies a standard for integrating smart cards and their readers into computing systems. In RHEL, the pcsc-lite package provides middleware to access smart cards that use the PC/SC API. A part of this package, the pcscd (PC/SC Smart Card) daemon, ensures that the system can access a smart card using the PC/SC protocol. Because access-control mechanisms built into smart cards, such as PINs, PIN pads, and biometrics, do not cover all possible threats, RHEL uses the polkit framework for more robust access control. The polkit authorization manager can grant access to privileged operations. In addition to granting access to disks, you can also use polkit to specify policies for securing smart cards. For example, you can define which users can perform which operations with a smart card. After installing the pcsc-lite package and starting the pcscd daemon, the system enforces policies defined in the /usr/share/polkit-1/actions/ directory. The default system-wide policy is in the /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy file. Polkit policy files use the XML format, and the syntax is described in the polkit(8) man page on your system. The polkitd service monitors the /etc/polkit-1/rules.d/ and /usr/share/polkit-1/rules.d/ directories for any changes in rule files stored in these directories. The files contain authorization rules in JavaScript format. System administrators can add custom rule files in both directories, and polkitd reads them in lexical order based on their file name. If two files have the same names, then the file in /etc/polkit-1/rules.d/ is read first. If you need to enable smart-card support when the system security services daemon (SSSD) does not run as root, you must install the sssd-polkit-rules package. The package provides polkit integration with SSSD. Additional resources polkit(8), polkitd(8), and pcscd(8) man pages on your system 5.2. Troubleshooting problems related to PC/SC and polkit Polkit policies that are automatically enforced after you install the pcsc-lite package and start the pcscd daemon may ask for authentication in the user's session even if the user does not directly interact with a smart card. In GNOME, you can see the following error message: Note that the system can install the pcsc-lite package as a dependency when you install other packages related to smart cards such as opensc. If your scenario does not require any interaction with smart cards and you want to prevent displaying authorization requests for the PC/SC daemon, you can remove the pcsc-lite package. Keeping the number of installed packages to a minimum is a good security practice in any case. If you use smart cards, start troubleshooting by checking the rules in the system-provided policy file at /usr/share/polkit-1/actions/org.debian.pcsc-lite.policy. You can add your custom rule files to the policy in the /etc/polkit-1/rules.d/ directory, for example, 03-allow-pcscd.rules.
Note that the rule files use JavaScript syntax, whereas the policy file is in XML format. To understand what authorization requests the system displays, check the Journal log, for example: The log entry means that the user is not authorized to perform an action by the policy. You can solve this denial by adding a corresponding rule to /etc/polkit-1/rules.d/. You can also search for log entries related to the polkitd unit, for example: In the output, the first entry means that the rule file contains some syntax error. The second entry means that the user failed to gain access to pcscd. You can also list all applications that use the PC/SC protocol with a short script. Create an executable file, for example, pcsc-apps.sh, and insert the following code: Run the script as root: Additional resources journalctl, polkit(8), polkitd(8), and pcscd(8) man pages. 5.3. Displaying more detailed information about polkit authorization to PC/SC In the default configuration, the polkit authorization framework sends only limited information to the Journal log. You can extend polkit log entries related to the PC/SC protocol by adding new rules. Prerequisites You have installed the pcsc-lite package on your system. The pcscd daemon is running. Procedure Create a new file in the /etc/polkit-1/rules.d/ directory: Edit the file in an editor of your choice, for example: Insert the following lines: Save the file, and exit the editor. Restart the pcscd and polkit services: Verification Make an authorization request for pcscd. For example, open the Firefox web browser or use the pkcs11-tool -L command provided by the opensc package. Display the extended log entries, for example: Additional resources polkit(8) and polkitd(8) man pages. 5.4. Additional resources Controlling access to smart cards Red Hat Blog article.
[ "Authentication is required to access the PC/SC daemon", "journalctl -b | grep pcsc Process 3087 (user: 1001) is NOT authorized for action: access_pcsc", "journalctl -u polkit polkitd[NNN]: Error compiling script /etc/polkit-1/rules.d/00-debug-pcscd.rules polkitd[NNN]: Operator of unix-session:c2 FAILED to authenticate to gain authorization for action org.debian.pcsc-lite.access_pcsc for unix-process:4800:14441 [/usr/libexec/gsd-smartcard] (owned by unix-user:group)", "#!/bin/bash cd /proc for p in [0-9]* do if grep libpcsclite.so.1.0.0 USDp/maps &> /dev/null then echo -n \"process: \" cat USDp/cmdline echo \" (USDp)\" fi done", "./pcsc-apps.sh process: /usr/libexec/gsd-smartcard (3048) enable-sync --auto-ssl-client-auth --enable-crashpad (4828)", "touch /etc/polkit-1/rules.d/00-test.rules", "vi /etc/polkit-1/rules.d/00-test.rules", "polkit.addRule(function(action, subject) { if (action.id == \"org.debian.pcsc-lite.access_pcsc\" || action.id == \"org.debian.pcsc-lite.access_card\") { polkit.log(\"action=\" + action); polkit.log(\"subject=\" + subject); } });", "systemctl restart pcscd.service pcscd.socket polkit.service", "journalctl -u polkit --since \"1 hour ago\" polkitd[1224]: <no filename>:4: action=[Action id='org.debian.pcsc-lite.access_pcsc'] polkitd[1224]: <no filename>:5: subject=[Subject pid=2020481 user=user' groups=user,wheel,mock,wireshark seat=null session=null local=true active=true]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/assembly_controlling-access-to-smart-cards-using-polkit_security-hardening
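The troubleshooting section above mentions solving an access_pcsc denial by adding a corresponding rule to /etc/polkit-1/rules.d/, but does not show one. The following is a minimal sketch only: it reuses the action IDs from this chapter and assumes a hypothetical smartcard-users group that you would create and populate yourself.
// /etc/polkit-1/rules.d/03-allow-pcscd.rules (illustrative)
polkit.addRule(function(action, subject) {
    if ((action.id == "org.debian.pcsc-lite.access_pcsc" ||
         action.id == "org.debian.pcsc-lite.access_card") &&
        subject.isInGroup("smartcard-users")) {   // hypothetical group
        return polkit.Result.YES;
    }
});
Because polkitd reads rule files in lexical order based on their file names, keeping a numeric prefix such as 03- makes the evaluation order of custom rules predictable.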
5.3. Model Extension Definition Registry (MED Registry)
5.3. Model Extension Definition Registry (MED Registry) A MED registry keeps track of all the MEDs that are registered in a workspace. Only registered MEDs can be used to extend a model. There are three different types of MEDs stored in the registry: Built-In MED - these are registered during Teiid Designer installation. These MEDs cannot be updated or unregistered by the user. Local MED - these are created or imported by the user. These MEDs can be updated, registered, and unregistered by the user. Imported MED - these are imported automatically when the server starts for the first time. Any functions that depend on these MEDs become available after that first start. The MED Registry state is persisted and is restored each time a new session is started.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/model_extension_definition_registry_med_registry
22.3.2. Mounting the Share
22.3.2. Mounting the Share Sometimes it is useful to mount a Samba share to a directory so that the files in the directory can be treated as if they are part of the local file system. To mount a Samba share to a directory, create the directory if it does not already exist, and execute the following command as root: This command mounts <sharename> from <servername> in the local directory /mnt/point/ .
[ "mount -t smbfs -o username= <username> // <servername> / <sharename> /mnt/point/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/connecting_to_a_samba_share-mounting_the_share
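As a concrete, hedged example (the server, share, and user names are placeholders), mounting a share named projects from fileserver01 for the user jsmith could look like this:
mkdir -p /mnt/point
mount -t smbfs -o username=jsmith //fileserver01/projects /mnt/point/
ls /mnt/point/      # the share's files now appear as part of the local file system
umount /mnt/point/  # unmount the share when finished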
Chapter 6. Updating RHEL 8 content
Chapter 6. Updating RHEL 8 content With YUM, you can check if your system has any pending updates. You can list packages that need updating and choose to update a single package, multiple packages, or all packages at once. If any of the packages you choose to update have dependencies, these dependencies are updated as well. 6.1. Checking for updates To identify which packages installed on your system have available updates, you can list them. Procedure Check the available updates for installed packages: The output returns the list of packages and their dependencies that have an update available. 6.2. Updating packages You can use YUM to update a single package or all packages and their dependencies at once. Important When applying updates to the kernel, YUM always installs a new kernel regardless of whether you are using the yum update or yum install command. Note that this only applies to packages identified by using the installonlypkgs YUM configuration option. Such packages include, for example, the kernel, kernel-core, and kernel-modules packages. Depending on your scenario, use one of the following options to apply updates: To update all packages and their dependencies, enter: To update a single package, enter: Important If you upgraded the GRUB boot loader packages on a BIOS or IBM Power system, reinstall GRUB. See Reinstalling GRUB. 6.3. Updating package groups Package groups bundle multiple packages, and you can use package groups to update all packages assigned to a group in a single step. Procedure Update packages from a specific package group: Important If you upgraded the GRUB boot loader packages on a BIOS or IBM Power system, reinstall GRUB. See Reinstalling GRUB. 6.4. Updating security-related packages You can use YUM to update packages that have security errata. Procedure Depending on your scenario, use one of the following options to apply updates: To upgrade to the latest available versions of packages that have security errata, enter: To update packages only to the minimal versions that fix the security errata, enter: Important If you upgraded the GRUB boot loader packages on a BIOS or IBM Power system, reinstall GRUB. See Reinstalling GRUB. Additional resources Managing and monitoring security updates
[ "yum check-update", "yum update", "yum update <package_name>", "yum group update <group_name>", "yum update --security", "yum update-minimal --security" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_managing_and_removing_user-space_components/updating-software-packages_using-appstream
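Before applying security updates, it can help to review which advisories are pending. The following is a hedged sketch using the updateinfo subcommand; the advisory ID is a placeholder.
yum updateinfo list --security        # list packages with pending security advisories
yum updateinfo info RHSA-2024:0001    # show details for a specific advisory (placeholder ID)
yum update --security                 # then apply the security updates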
Chapter 5. Control plane backup and restore
Chapter 5. Control plane backup and restore 5.1. Backing up etcd etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. Back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost. Be sure to take an etcd backup before you update your cluster. Taking a backup before you update is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.17.5 cluster must use an etcd backup that was taken from 4.17.5. Important Back up your cluster's etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host. After you have an etcd backup, you can restore to a cluster state. 5.1.1. Backing up etcd data Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd. Important Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have checked whether the cluster-wide proxy is enabled. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set. Procedure Start a debug session as root for a control plane node: USD oc debug --as-root node/<node_name> Change your root directory to /host in the debug shell: sh-4.4# chroot /host If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands: USD export HTTP_PROXY=http://<your_proxy.example.com>:8080 USD export HTTPS_PROXY=https://<your_proxy.example.com>:8080 USD export NO_PROXY=<example.com> Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to. Tip The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup Example script output found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {"level":"info","ts":1624647639.0188997,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part"} {"level":"info","ts":"2021-06-25T19:00:39.030Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"} {"level":"info","ts":1624647639.0301006,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://10.0.0.5:2379"} {"level":"info","ts":"2021-06-25T19:00:40.215Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"} {"level":"info","ts":1624647640.6032252,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://10.0.0.5:2379","size":"114 MB","took":1.584090459} {"level":"info","ts":1624647640.6047094,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/home/core/assets/backup/snapshot_2021-06-25_190035.db"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {"hash":3866667823,"revision":31407,"totalKey":12828,"totalSize":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host: snapshot_<datetimestamp>.db : This file is the etcd snapshot. The cluster-backup.sh script confirms its validity. static_kuberesources_<datetimestamp>.tar.gz : This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot. Note If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot. Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted. 5.1.2. Additional resources Recovering an unhealthy etcd cluster 5.1.3. Creating automated etcd backups The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers. Important Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow these steps to enable automated backups for etcd. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. 
Do not enable this feature set on production clusters. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents: apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade Apply the CR and enable automated backups: USD oc apply -f enable-tech-preview-no-upgrade.yaml It takes time to enable the related APIs. Verify the creation of the custom resource definition (CRD) by running the following command: USD oc get crd | grep backup Example output backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z 5.1.3.1. Creating a single etcd backup Follow these steps to create a single etcd backup by creating and applying a custom resource (CR). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the PVC to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup: Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the node to attach this PV to. 
Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a CR file named etcd-single-backup.yaml with contents such as the following example: apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1 1 The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment. Apply the CR to start a single backup: USD oc apply -f etcd-single-backup.yaml 5.1.3.2. Creating recurring etcd backups Follow these steps to create automated recurring backups of etcd. Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift CLI ( oc ). Procedure If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups: Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Note Each of the following providers require changes to the accessModes and storageClassName keys: Provider accessModes value storageClassName value AWS with the versioned-installer-efc_operator-ci profile - ReadWriteMany efs-sc Google Cloud Platform - ReadWriteMany filestore-csi Microsoft Azure - ReadWriteMany azurefile-csi Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Verify the creation of the PVC by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s Note Dynamic PVCs stay in the Pending state until they are mounted. If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps: Warning If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data. 
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate Apply the StorageClass CR by running the following command: USD oc apply -f etcd-backup-local-storage.yaml Create a PV named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example: apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2 1 The amount of storage available to the PV. Adjust this value for your requirements. 2 Replace this value with the master node to attach this PV to. Tip Run the following command to list the available nodes: USD oc get nodes Verify the creation of the PV by running the following command: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s Create a PVC named etcd-backup-pvc.yaml with contents such as the following example: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage 1 The amount of storage available to the PVC. Adjust this value for your requirements. Apply the PVC by running the following command: USD oc apply -f etcd-backup-pvc.yaml Create a custom resource definition (CRD) file named etcd-recurring-backups.yaml . The contents of the created CRD define the schedule and retention type of automated backups. For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example: apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: "20 4 * * *" 1 timeZone: "UTC" pvcName: etcd-backup-pvc 1 The CronTab schedule for recurring backups. Adjust this value for your needs. To use retention based on the maximum number of backups, add the following key-value pairs to the etcd key: spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2 1 The retention type. Defaults to RetentionNumber if unspecified. 2 The maximum number of backups to retain. Adjust this value for your needs. Defaults to 15 backups if unspecified. Warning A known issue causes the number of retained backups to be one greater than the configured value. For retention based on the file size of backups, use the following: spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1 1 The maximum file size of the retained backups in gigabytes. Adjust this value for your needs. Defaults to 10 GB if unspecified. Warning A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value. Create the cron job defined by the CRD by running the following command: USD oc create -f etcd-recurring-backup.yaml To find the created cron job, run the following command: USD oc get cronjob -n openshift-etcd 5.2. 
Replacing an unhealthy etcd member This document describes the process to replace a single unhealthy etcd member. This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping. Note If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a cluster state instead of this procedure. If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure. If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member. 5.2.1. Prerequisites Take an etcd backup prior to replacing an unhealthy etcd member. 5.2.2. Identifying an unhealthy etcd member You can identify if your cluster has an unhealthy etcd member. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Check the status of the EtcdMembersAvailable status condition using the following command: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}' Review the output: 2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy. 5.2.3. Determining the state of the unhealthy etcd member The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in: The machine is not running or the node is not ready The etcd pod is crashlooping This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member. Note If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have identified an unhealthy etcd member. Procedure Determine if the machine is not running : USD oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running Example output ip-10-0-131-183.ec2.internal stopped 1 1 This output lists the node and the status of the node's machine. If the status is anything other than running , then the machine is not running . If the machine is not running , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the node is not ready . If either of the following scenarios are true, then the node is not ready . If the machine is running, then check whether the node is unreachable: USD oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable Example output ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1 1 If the node is listed with an unreachable taint, then the node is not ready . 
If the node is still reachable, then check whether the node is listed as NotReady : USD oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" Example output ip-10-0-131-183.ec2.internal NotReady master 122m v1.30.3 1 1 If the node is listed as NotReady , then the node is not ready . If the node is not ready , then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure. Determine if the etcd pod is crashlooping . If the machine is running and the node is ready, then check whether the etcd pod is crashlooping. Verify that all control plane nodes are listed as Ready : USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.30.3 Check whether the status of an etcd pod is either Error or CrashloopBackoff : USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m 1 Since this status of this pod is Error , then the etcd pod is crashlooping . If the etcd pod is crashlooping , then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure. 5.2.4. Replacing the unhealthy etcd member Depending on the state of your unhealthy etcd member, use one of the following procedures: Replacing an unhealthy etcd member whose machine is not running or whose node is not ready Installing a primary control plane node on an unhealthy cluster Replacing an unhealthy etcd member whose etcd pod is crashlooping Replacing an unhealthy stopped baremetal etcd member 5.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready. Note If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for a more simple etcd recovery procedure. Prerequisites You have identified the unhealthy etcd member. You have verified that either the machine is not running or the node is not ready. Important You must wait if you power off other control plane nodes. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure, so that you can restore your cluster if you experience any issues. Procedure Remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The USD etcdctl endpoint health command will list the removed member until the procedure of replacement is finished and a new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 6fc1e7c9db35841d Example output Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Important After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change. Note etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure. 
Delete the affected node by running the following command: USD oc delete node <node_name> Example command USD oc delete node ip-10-0-131-183.ec2.internal Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master by using the same method that was used to originally create it. Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal . Delete the machine of the unhealthy member: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the unhealthy node. A new machine is automatically provisioned after deleting the machine of the unhealthy member. 
Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready once the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Note Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might experience the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. Verify that there are exactly three etcd members. 
Connect to the running etcd container, passing in the name of a pod that was not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. Additional resources Recovering a degraded etcd Operator Installing a primary control plane node on an unhealthy cluster 5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping. Prerequisites You have identified the unhealthy etcd member. You have verified that the etcd pod is crashlooping. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Stop the crashlooping etcd pod. Debug the node that is crashlooping. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc debug node/ip-10-0-131-183.ec2.internal 1 1 Replace this with the name of the unhealthy node. Change your root directory to /host : sh-4.2# chroot /host Move the existing etcd pod file out of the kubelet manifest directory: sh-4.2# mkdir /var/lib/etcd-backup sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/ Move the etcd data directory to a different location: sh-4.2# mv /var/lib/etcd/ /tmp You can now exit the node shell. Remove the unhealthy member. Choose a pod that is not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m Connect to the running etcd container, passing in the name of a pod that is not on the affected node. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: sh-4.2# etcdctl member remove 62bcf33650a7170a Example output Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346 View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+ You can now exit the node shell. Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1 1 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: Example output etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal Delete the serving secret: USD oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal Delete the metrics secret: USD oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal Force etcd redeployment. 
In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod. Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that the new member is available and healthy. Connect to the running etcd container again. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal Verify that all members are healthy: sh-4.2# etcdctl endpoint health Example output https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms 5.2.4.3. Replacing an unhealthy bare metal etcd member whose machine is not running or whose node is not ready This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready. If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it. Prerequisites You have identified the unhealthy bare metal etcd member. You have verified that either the machine is not running or the node is not ready. You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Important You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues. Procedure Verify and remove the unhealthy member. 
Choose a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none> Connect to the running etcd container, passing in the name of a pod that is not on the affected node: In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The etcdctl endpoint health command will list the removed member until the replacement procedure is completed and the new member is added. Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command: Warning Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss. sh-4.2# etcdctl member remove 7a8197040a5126c8 Example output Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b View the member list again and verify that the member was removed: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ You can now exit the node shell. Important After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot. 
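If you want to wait for the remaining etcd members to settle before continuing, the following is a minimal shell sketch, not part of the documented procedure, that polls the etcd pods until they all report Running. It assumes you are logged in with the cluster-admin role and that the API server may be briefly unreachable while the remaining etcd instances reboot.
# Poll until the etcd pod list can be retrieved and every pod reports Running.
while true; do
  statuses=$(oc -n openshift-etcd get pods -l k8s-app=etcd --no-headers 2>/dev/null | awk '{print $3}')
  if [ -n "$statuses" ] && ! echo "$statuses" | grep -qv '^Running$'; then
    echo "All remaining etcd pods report Running."
    break
  fi
  echo "Waiting for the remaining etcd pods to settle..."
  sleep 10
done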
Turn off the quorum guard by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. Remove the old secrets for the unhealthy etcd member that was removed by running the following commands. List the secrets for the unhealthy etcd member that was removed. USD oc get secrets -n openshift-etcd | grep openshift-control-plane-2 Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure. There is a peer, serving, and metrics secret as shown in the following output: etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m Delete the secrets for the unhealthy etcd member that was removed. Delete the peer secret: USD oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret "etcd-peer-openshift-control-plane-2" deleted Delete the serving metrics secret: USD oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-metrics-openshift-control-plane-2" deleted Delete the serving secret: USD oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret "etcd-serving-openshift-control-plane-2" deleted Obtain the machine for the unhealthy member. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 This is the control plane machine for the unhealthy node, examplecluster-control-plane-2 . Ensure that the Bare Metal Operator is available by running the following command: USD oc get clusteroperator baremetal Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.17.0 True False False 3d15h Remove the old BareMetalHost object by running the following command: USD oc delete bmh openshift-control-plane-2 -n openshift-machine-api Example output baremetalhost.metal3.io "openshift-control-plane-2" deleted Delete the machine of the unhealthy member by running the following command: USD oc delete machine -n openshift-machine-api examplecluster-control-plane-2 After you remove the BareMetalHost and Machine objects, the Machine controller automatically deletes the Node object.
If deletion of the machine is delayed for any reason or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field. Important Do not interrupt machine deletion by pressing Ctrl+c . You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields. A new machine is automatically provisioned after deleting the machine of the unhealthy member. Edit the machine configuration by running the following command: USD oc edit machine -n openshift-machine-api examplecluster-control-plane-2 Delete the following fields in the Machine custom resource, and then save the updated file: finalizers: - machine.machine.openshift.io Example output machine.machine.openshift.io/examplecluster-control-plane-2 edited Verify that the machine was deleted by running the following command: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned Verify that the node has been deleted by running the following command: USD oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.30.3 openshift-control-plane-1 Ready master 3h24m v1.30.3 openshift-compute-0 Ready worker 176m v1.30.3 openshift-compute-1 Ready worker 176m v1.30.3 Create the new BareMetalHost object and the secret to store the BMC credentials: USD cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF Note The username and password can be found from the other bare metal host's secrets. The protocol to use in bmc:address can be taken from other bmh objects. Important If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true . Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program. After the inspection is complete, the BareMetalHost object is created and available to be provisioned. 
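If you prefer to wait for the inspection to finish from the command line, the following is a minimal sketch, an optional convenience rather than a documented step. It assumes the new host is named openshift-control-plane-2, as in the example above, and polls the BareMetalHost provisioning state until the host reports available.
# Poll the provisioning state of the new BareMetalHost until inspection
# completes and the host becomes available for provisioning.
until [ "$(oc get bmh openshift-control-plane-2 -n openshift-machine-api -o jsonpath='{.status.provisioning.state}')" = "available" ]; do
  echo "Waiting for inspection of openshift-control-plane-2 to complete..."
  sleep 30
done
echo "openshift-control-plane-2 is available."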
Verify the creation process using available BareMetalHost objects: USD oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that a new machine has been created: USD oc get machines -n openshift-machine-api -o wide Example output NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned 1 The new control plane machine is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Verify that the bare metal host becomes provisioned and that no errors are reported by running the following command: USD oc get bmh -n openshift-machine-api Example output NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m Verify that the new node is added and in the Ready state by running this command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.30.3 openshift-control-plane-1 Ready master 4h26m v1.30.3 openshift-control-plane-2 Ready master 12m v1.30.3 openshift-compute-0 Ready worker 3h58m v1.30.3 openshift-compute-1 Ready worker 3h58m v1.30.3 Turn the quorum guard back on by entering the following command: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command: USD oc get etcd/cluster -oyaml If you are using single-node OpenShift, restart the node.
Otherwise, you might encounter the following error in the etcd cluster Operator: Example output EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again] Verification Verify that all etcd pods are running properly. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m If the output from the command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. To verify there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc rsh -n openshift-etcd etcd-openshift-control-plane-0 View the member list: sh-4.2# etcdctl member list -w table Example output +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ Note If the output from the command lists more than three etcd members, you must carefully remove the unwanted member. Verify that all etcd members are healthy by running the following command: # etcdctl endpoint health --cluster Example output https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms Validate that all nodes are at the latest revision by running the following command: USD oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' 5.2.5. Additional resources Quorum protection with machine lifecycle hooks 5.3. Disaster recovery 5.3.1. 
About disaster recovery The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state. Important Disaster recovery requires you to have at least one healthy control plane host. Restoring to a previous cluster state This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state. If applicable, you might also need to recover from expired control plane certificates . Warning Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort. Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster. Note If you have a majority of your control plane hosts still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member . Recovering from expired control plane certificates This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates. 5.3.2. Restoring to a previous cluster state To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. 5.3.2.1. About restoring cluster state You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations: The cluster has lost the majority of control plane hosts (quorum loss). An administrator has deleted something critical and must restore to recover the cluster. Warning Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort. If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup. Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, persistent volume controllers, and OpenShift Operators, including the network Operator. It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues. In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates. 5.3.2.2.
Restoring to a previous cluster state You can use a saved etcd backup to restore to a previous cluster state or to restore a cluster that has lost the majority of control plane hosts. Note If your cluster uses a control plane machine set, see "Troubleshooting the control plane machine set" for a simpler etcd recovery procedure. Important When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2. Prerequisites Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation. A healthy control plane host to use as the recovery host. SSH access to control plane hosts. A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz . Important For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate other non-recovery, control plane machines, one by one. Procedure Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on. Establish SSH connectivity to each of the control plane nodes, including the recovery host. kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal. Important If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state. Copy the etcd backup directory to the recovery control plane host. This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host. Stop the static pods on any other control plane nodes. Note You do not need to stop the static pods on the recovery host. Access a control plane host that is not the recovery host. Move the existing etcd pod file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp Verify that the etcd pods are stopped by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-apiserver file out of the kubelet manifest directory by running: USD sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp Verify that the kube-apiserver containers are stopped by running: USD sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the existing kube-controller-manager file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp Verify that the kube-controller-manager containers are stopped by running: USD sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again.
Move the existing kube-scheduler file out of the kubelet manifest directory by using: USD sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp Verify that the kube-scheduler containers are stopped by using: USD sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard" If the output of this command is not empty, wait a few minutes and check again. Move the etcd data directory to a different location with the following example: USD sudo mv -v /var/lib/etcd/ /tmp If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps: Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory: USD sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp Verify that any containers managed by the keepalived daemon are stopped: USD sudo crictl ps --name keepalived The output of this command should be empty. If it is not empty, wait a few minutes and check again. Check if the control plane has any Virtual IPs (VIPs) assigned to it: USD ip -o address | egrep '<api_vip>|<ingress_vip>' For each reported VIP, run the following command to remove it: USD sudo ip address del <reported_vip> dev <reported_vip_device> Repeat this step on each of the other control plane hosts that is not the recovery host. Access the recovery control plane host. If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP: USD ip -o address | grep <api_vip> The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or configured incorrectly. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY environment variables. Tip You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml . The proxy is enabled if the httpProxy , httpsProxy , and noProxy fields have values set. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory: USD sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup Example script output ...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml The cluster-restore.sh script must show that etcd , kube-apiserver , kube-controller-manager , and kube-scheduler pods are stopped and then started at the end of the restore process. Note The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup. Check the nodes to ensure they are in the Ready state. 
Run the following command: USD oc get nodes -w Sample output NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3 It can take several minutes for all nodes to report their state. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console. USD ssh -i <ssh-key-path> core@<master-hostname> Sample pki directory sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem Restart the kubelet service on all control plane hosts. From the recovery host, run: USD sudo systemctl restart kubelet.service Repeat this step on all other control plane hosts. Approve the pending Certificate Signing Requests (CSRs): Note Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step. Get the list of current CSRs by running: USD oc get csr Example output 1 2 A pending kubelet serving CSR, requested by the node for the kubelet serving endpoint. 3 4 A pending kubelet client CSR, requested with the node-bootstrapper node bootstrap credentials. Review the details of a CSR to verify that it is valid by running: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR by running: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet serving CSR by running: USD oc adm certificate approve <csr_name> Verify that the single member control plane has started successfully. From the recovery host, verify that the etcd container is running by using: USD sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard" Example output 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0 From the recovery host, verify that the etcd pod is running by using: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s If the status is Pending , or the output lists more than one running etcd pod, wait a few minutes and check again. If you are using the OVN-Kubernetes network plugin, you must restart the ovnkube-controlplane pods. Delete all of the ovnkube-controlplane pods by running: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane Verify that all of the ovnkube-controlplane pods were redeployed by using: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one.
Use the following steps to restart OVN-Kubernetes pods on each node: Important Restart OVN-Kubernetes pods in the following order: The recovery control plane host The other control plane hosts (if available) The other nodes Note Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail , then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again. Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail . Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run: USD sudo rm -f /var/lib/ovn-ic/etc/*.db Restart the Open vSwitch services. Access the node by using Secure Shell (SSH) and run the following command: USD sudo systemctl restart ovs-vswitchd ovsdb-server Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting: USD oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Check the status of the OVN pods by running the following command: USD oc get po -n openshift-ovn-kubernetes If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting: USD oc delete node <node> Use SSH to log in to the OVN pod node with the Terminating status by running the following command: USD ssh -i <ssh-key-path> core@<node> Move all PEM files from the /var/lib/kubelet/pki directory by running the following command: USD sudo mv /var/lib/kubelet/pki/* /tmp Restart the kubelet service by running the following command: USD sudo systemctl restart kubelet.service Return to the recovery etcd machines by running the following command: USD oc get csr Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR: USD oc adm certificate approve csr-<uuid> Verify that the node is back by running the following command: USD oc get nodes Verify that the ovnkube-node pod is running again with: USD oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node> Note It might take several minutes for the pods to restart. Delete and re-create other non-recovery, control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up. If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal". Warning Do not delete and re-create the machine for the recovery host. If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps: Warning Do not delete and re-create the machine for the recovery host.
For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node". Obtain the machine for one of the lost control plane hosts. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal . Delete the machine of the lost control plane host by running: USD oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1 1 Specify the name of the control plane machine for the lost control plane host. A new machine is automatically provisioned after deleting the machine of the lost control plane host. Verify that a new machine has been created by running: USD oc get machines -n openshift-machine-api -o wide Example output: NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running 1 The new machine, clustername-8qw5l-master-3 is being created and is ready after the phase changes from Provisioning to Running . It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state. Repeat these steps for each lost control plane host that is not the recovery host. 
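If several control plane hosts were lost, you can script the repetition. The following is a minimal sketch, an optional convenience rather than a documented step, that assumes the AWS example above, where lost hosts show a stopped instance state. Review the generated list before deleting anything and make sure the machine for the recovery host is not included.
# List control plane machines whose cloud instances are stopped, then delete
# each one so that the Machine API provisions a replacement.
for machine in $(oc get machines -n openshift-machine-api \
    -l machine.openshift.io/cluster-api-machine-role=master \
    -o jsonpath='{range .items[?(@.status.providerStatus.instanceState=="stopped")]}{.metadata.name}{"\n"}{end}'); do
  echo "Deleting lost control plane machine: ${machine}"
  oc delete machine -n openshift-machine-api "${machine}"
done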
Turn off the quorum guard by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}' This command ensures that you can successfully re-create secrets and roll out the static pods. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running: USD export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig Force etcd redeployment. In the same terminal window where you exported the recovery kubeconfig file, run: USD oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge 1 1 The forceRedeploymentReason value must be unique, which is why a timestamp is appended. The etcd redeployment starts. When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up. Turn the quorum guard back on by entering: USD oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}' You can verify that the unsupportedConfigOverrides section is removed from the object by running: USD oc get etcd/cluster -oyaml Verify all nodes are updated to the latest revision. In a terminal that has access to the cluster as a cluster-admin user, run: USD oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer. In a terminal that has access to the cluster as a cluster-admin user, run: Force a new rollout for kube-apiserver : USD oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision. USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. 
Force a new rollout for the Kubernetes controller manager by running the following command: USD oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by running: USD oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Force a new rollout for the kube-scheduler by running: USD oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date --rfc-3339=ns )"'"}}' --type=merge Verify all nodes are updated to the latest revision by using: USD oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 7 1 1 In this example, the latest revision number is 7 . If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7 , this means that the update is still in progress. Wait a few minutes and try again. Monitor the platform Operators by running: USD oc adm wait-for-stable-cluster This process can take up to 15 minutes. Verify that all control plane hosts have started and joined the cluster. In a terminal that has access to the cluster as a cluster-admin user, run the following command: USD oc -n openshift-etcd get pods -l k8s-app=etcd Example output etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes. Note On completion of the procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted. Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates as opposed to OAuth tokens. You can authenticate with this file by issuing the following command: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig Issue the following command to display your authenticated user name: USD oc whoami 5.3.2.3. Additional resources Installing a user-provisioned cluster on bare metal Creating a bastion host to access OpenShift Container Platform instances and the control plane nodes with SSH Replacing a bare-metal control plane node 5.3.2.4. Issues and workarounds for restoring a persistent storage state If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd.
It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated. Important The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa. The following are some example scenarios that produce an out-of-date status: A MySQL database is running in a pod backed by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume. Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start. Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators. A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist. To fix this problem, an administrator must: Manually remove the PVs with invalid devices. Remove symlinks from respective nodes. Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources). 5.3.3. Recovering from expired control plane certificates 5.3.3.1. Recovering from expired control plane certificates The cluster can automatically recover from expired control plane certificates. However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs. Use the following steps to approve the pending CSRs: Procedure Get the list of current CSRs: USD oc get csr Example output 1 A pending kubelet serving CSR (for user-provisioned installations). 2 A pending node-bootstrapper CSR. Review the details of a CSR to verify that it is valid: USD oc describe csr <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. Approve each valid node-bootstrapper CSR: USD oc adm certificate approve <csr_name> For user-provisioned installations, approve each valid kubelet serving CSR: USD oc adm certificate approve <csr_name>
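When many CSRs are pending at once, approving them one at a time can be tedious. The following is a minimal sketch of a common convenience pattern, not a required step, that approves every currently pending CSR. Review the output of oc get csr first so that you only approve requests you expect.
# Approve every CSR that has no status yet, that is, every CSR still Pending.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve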
[ "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade", "oc apply -f enable-tech-preview-no-upgrade.yaml", "oc get crd | grep backup", "backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem", "oc apply -f etcd-backup-pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s", "apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1", "oc apply -f etcd-single-backup.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate", "oc apply -f etcd-backup-local-storage.yaml", "apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 
etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1", "oc apply -f etcd-backup-pvc.yaml", "apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1", "oc apply -f etcd-single-backup.yaml", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage", "oc apply -f etcd-backup-pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate", "oc apply -f etcd-backup-local-storage.yaml", "apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2", "oc get nodes", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage", "oc apply -f etcd-backup-pvc.yaml", "apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc", "spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2", "spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1", "oc create -f etcd-recurring-backup.yaml", "oc get cronjob -n openshift-etcd", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'", "2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy", "oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running", "ip-10-0-131-183.ec2.internal stopped 1", "oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable", "ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1", "oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"", "ip-10-0-131-183.ec2.internal NotReady master 122m v1.30.3 1", "oc get nodes -l node-role.kubernetes.io/master", "NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.30.3 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.30.3", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 
etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 6fc1e7c9db35841d", "Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc delete node <node_name>", "oc delete node ip-10-0-131-183.ec2.internal", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal 
aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc debug node/ip-10-0-131-183.ec2.internal 1", "sh-4.2# chroot /host", "sh-4.2# mkdir /var/lib/etcd-backup", "sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/", "sh-4.2# mv /var/lib/etcd/ /tmp", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m 
etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "sh-4.2# etcdctl member remove 62bcf33650a7170a", "Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346", "sh-4.2# etcdctl member list -w table", "+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1", "etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m", "oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal", "oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal", "sh-4.2# etcdctl endpoint health", "https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms", "oc -n 
openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+", "sh-4.2# etcdctl member remove 7a8197040a5126c8", "Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc get secrets -n openshift-etcd | grep openshift-control-plane-2", "etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m", "oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted", "oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 
Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get clusteroperator baremetal", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.17.0 True False False 3d15h", "oc delete bmh openshift-control-plane-2 -n openshift-machine-api", "baremetalhost.metal3.io \"openshift-control-plane-2\" deleted", "oc delete machine -n openshift-machine-api examplecluster-control-plane-2", "oc edit machine -n openshift-machine-api examplecluster-control-plane-2", "finalizers: - machine.machine.openshift.io", "machine.machine.openshift.io/examplecluster-control-plane-2 edited", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.30.3 openshift-control-plane-1 Ready master 3h24m v1.30.3 openshift-compute-0 Ready worker 176m v1.30.3 openshift-compute-1 Ready worker 176m v1.30.3", "cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF", "oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 
baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned", "oc get bmh -n openshift-machine-api", "oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m", "oc get nodes", "oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.30.3 openshift-control-plane-1 Ready master 4h26m v1.30.3 openshift-control-plane-2 Ready master 12m v1.30.3 openshift-compute-0 Ready worker 3h58m v1.30.3 openshift-compute-1 Ready worker 3h58m v1.30.3", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc rsh -n openshift-etcd etcd-openshift-control-plane-0", "sh-4.2# etcdctl member list -w table", "+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | 
+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+", "etcdctl endpoint health --cluster", "https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms", "oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision", "sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp", "sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp", "sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"", "sudo mv -v /var/lib/etcd/ /tmp", "sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp", "sudo crictl ps --name keepalived", "ip -o address | egrep '<api_vip>|<ingress_vip>'", "sudo ip address del <reported_vip> dev <reported_vip_device>", "ip -o address | grep <api_vip>", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet 
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane", "sudo rm -f /var/lib/ovn-ic/etc/*.db", "sudo systemctl restart ovs-vswitchd ovsdb-server", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get po -n openshift-ovn-kubernetes", "oc delete node <node>", "ssh -i <ssh-key-path> core@<node>", "sudo mv /var/lib/kubelet/pki/* /tmp", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending", "adm certificate approve csr-<uuid>", "oc get nodes", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 
3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc adm wait-for-stable-cluster", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/backup_and_restore/control-plane-backup-and-restore
Postinstallation configuration
Postinstallation configuration
OpenShift Container Platform 4.18
Day 2 operations for OpenShift Container Platform
Red Hat OpenShift Documentation Team
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}'", "dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <control_plane_name> 1", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal", "oc get configs.imageregistry/cluster -o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli image-registry.openshift-image-registry.svc:5000/openshift/cli latest 8 hours ago", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal internal: subnetName: <subnet_name> vnetName: <vnet_name> networkResourceGroupName: <network_resource_group_name>", "oc get configs.imageregistry/cluster 
-o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli image-registry.openshift-image-registry.svc:5000/openshift/cli latest 8 hours ago", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli default-route-openshift-image-registry.<cluster_dns>/cli latest 8 hours ago", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>", "Login Succeeded!", "podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns> /openshift/tools", "Trying to pull default-route-openshift-image-registry.<cluster_dns>/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "az login", "az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`", "az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}", "az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}", "az sig image-definition create 
--resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2", "RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}", "az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0", "/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0", "az login", "az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.x86_64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`", "az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}", "az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}", "az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2", "RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n \"USD{BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL}", "az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0", "/resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker 
machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-subnet-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data", "oc get -o jsonpath=\"{.status.infrastructureName}{'\\n'}\" infrastructure cluster", "oc get configmap/coreos-bootimages -n 
openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.<arch>.images.aws.regions.\"<region>\".image'", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.images.gcp'", "\"gcp\": { \"release\": \"415.92.202309142014-0\", \"project\": \"rhcos-cloud\", \"name\": \"rhcos-415-92-202309142014-0-gcp-aarch64\" }", "projects/<project>/global/images/<image_name>", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install 
--ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<http_server>/worker.ign", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "cio_ignore=all,!condev rd.neednet=1 
console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<http_server>/worker.ign", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0", 
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "virt-install --connect qemu:///system --name <vm_name> --autostart --os-variant rhel9.4 \\ 1 --cpu host --vcpus <vcpus> --memory <memory_mb> --disk <vm_name>.qcow2,size=<image_size> --network network=<virt_network_parm> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 2 --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/vda\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/worker.ign \" \\ 3 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 4 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none\" \\ 5 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --noautoconsole --wait", "osinfo-query os -f short-id", "oc 
get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m 
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.31.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.31.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.31.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.31.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.31.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.31.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.31.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # spec: # template: # spec: # taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64", "oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule", "oc annotate namespace my-namespace 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"multiarch.openshift.io/arch\"}]'", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: 
\"NoSchedule\"", "oc label node <node_name> <label>", "oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: \"\" kubernetes.io/arch: arm64", "oc create -f <filename>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker-64k-pages\" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m", "oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'", "oc get istag cli-artifacts:latest -n openshift -oyaml", "dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux", "oc create ns openshift-multiarch-tuning-operator", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {}", "oc create -f <file_name> 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.v1.0.0", "oc create -f <file_name> 1", "oc get csv -n openshift-multiarch-tuning-operator", "NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.v1.0.0 Multiarch Tuning Operator 1.0.0 multiarch-tuning-operator.v0.9.0 Succeeded", "oc get operatorgroup -n openshift-multiarch-tuning-operator", "NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m", "oc get subscription -n openshift-multiarch-tuning-operator", "NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable", "apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist", "namespaceSelector: 
matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists", "apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist", "oc create -f <file_name> 1", "oc get clusterpodplacementconfig", "NAME AGE cluster 29s", "oc delete clusterpodplacementconfig cluster", "oc get clusterpodplacementconfig", "No resources found", "oc get subscription.operators.coreos.com -n <namespace> 1", "NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable", "oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1", "currentCSV: multiarch-tuning-operator.v1.0.0", "oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1", "subscription.operators.coreos.com \"openshift-multiarch-tuning-operator\" deleted", "oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"multiarch-tuning-operator.v1.0.0\" deleted", "oc get csv -n <namespace> 1", "No resources found in openshift-multiarch-tuning-operator namespace.", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.31.3", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: 
<node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.31.3", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 
55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master 
rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Exists 6 value: reserved 7", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.31.3", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: 
- openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17 expanders: [\"Random\"] 18", "oc create -f <filename>.yaml 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" 
release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.31.3", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get 
authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 
10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo -E /usr/local/bin/disable-etcd.sh", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/<etcd-backup-directory>", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "oc adm wait-for-stable-cluster", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD(date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator 
csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.18-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "oc project openshift-machine-api", "oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }", "oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: 
ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4", "oc create -f <file-name>.yaml", "oc get machineset", "NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.31.3 ip-10-0-146-113.ec2.internal Ready master 127m v1.31.3 ip-10-0-153-35.ec2.internal Ready worker 118m v1.31.3 ip-10-0-176-58.ec2.internal Ready master 126m v1.31.3 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.31.3 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.31.3 ip-10-0-245-59.ec2.internal Ready worker 116m v1.31.3", "oc debug node/<node-name> -- chroot /host lsblk", "oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc 
describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-kubelet-config -o yaml", "spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed 
Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a 
Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 
dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: 
clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", 
"apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {}", "oc create -f <file_name>.yaml", "oc create -f <ipaddressclaim_filename>", "kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: \"8192\" machine.openshift.io/vCPU: \"4\" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: \"true\" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - \"192.168.204.1\" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api", "NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool", "oc create -f ipaddress.yaml", "apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23", "oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{\"status\":{\"addressRef\": {\"name\": \"cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0\"}}}' -n openshift-machine-api --subresource=status", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", 
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)", "get imagestreams -nopenshift", "oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12", "oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift", "oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift", "get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list 
watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list 
watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] 
subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. 
rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. 
It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated", "oc apply -f add-<cluster_role>.yaml", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.18 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: 
Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2", "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>", "<service_account_name>@<project_id>.iam.gserviceaccount.com", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>", "oc adm wait-for-stable-cluster --minimum-stable-period=5s", "INFRA_ID=USD(oc get infrastructures cluster -o jsonpath='{.status.infrastructureName}') CLUSTER_NAME=USD{INFRA_ID%-*} 1", "AWS_BUCKET=USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} | awk -F'://' '{printUSD2}' |awk -F'.' '{printUSD1}')", "basename USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} )", "<subdomain>.cloudfront.net", "aws cloudfront list-distributions --query \"DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(DomainName, '<subdomain>.cloudfront.net')]\"", "[ { \"DomainName\": \"<subdomain>.cloudfront.net\", \"OriginDomainName\": \"<s3_bucket>.s3.us-east-2.amazonaws.com\" } ]", "AWS_BUCKET=USD<s3_bucket>", "TEMPDIR=USD(mktemp -d)", "oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator", "oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public", "ccoctl aws create-identity-provider --dry-run \\ 1 --output-dir USD{TEMPDIR} --name fake \\ 2 --region us-east-1 3", "cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json", "aws s3api get-object --bucket USD{AWS_BUCKET} --key keys.json USD{TEMPDIR}/jwks.current.json", "jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json", "aws s3api put-object --bucket USD{AWS_BUCKET} --tagging \"openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned\" --key keys.json --body USD{TEMPDIR}/jwks.combined.json", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "aws s3api put-object --bucket USD{AWS_BUCKET} --tagging \"openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned\" --key keys.json --body USD{TEMPDIR}/jwks.new.json", "oc adm wait-for-stable-cluster --minimum-stable-period=5s", "CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') GCP_BUCKET=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f4)", "TEMPDIR=USD(mktemp -d)", "oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator", "oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public", "ccoctl gcp create-workload-identity-provider --dry-run \\ 1 --output-dir=USD{TEMPDIR} --name fake \\ 2 --project fake 
--workload-identity-pool fake", "cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json", "gcloud storage cp gs://USD{GCP_BUCKET}/keys.json USD{TEMPDIR}/jwks.current.json", "jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json", "gcloud storage cp USD{TEMPDIR}/jwks.combined.json gs://USD{GCP_BUCKET}/keys.json", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "gcloud storage cp USD{TEMPDIR}/jwks.new.json gs://USD{GCP_BUCKET}/keys.json", "oc adm wait-for-stable-cluster --minimum-stable-period=5s", "CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') AZURE_STORAGE_ACCOUNT=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f3 | cut -d \".\" -f1) AZURE_STORAGE_CONTAINER=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f4)", "TEMPDIR=USD(mktemp -d)", "oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator", "oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public", "ccoctl aws create-identity-provider \\ 1 --dry-run \\ 2 --output-dir USD{TEMPDIR} --name fake \\ 3 --region us-east-1 4", "cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json", "az storage blob download --container-name USD{AZURE_STORAGE_CONTAINER} --account-name USD{AZURE_STORAGE_ACCOUNT} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.current.json", "jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json", "az storage blob upload --overwrite --account-name USD{AZURE_STORAGE_ACCOUNT} --container-name USD{AZURE_STORAGE_CONTAINER} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.combined.json", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "az storage blob upload --overwrite --account-name USD{AZURE_STORAGE_ACCOUNT} --container-name USD{AZURE_STORAGE_CONTAINER} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.new.json", "ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract 
USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs --output 'go-template={{index .data \"service-account-001.pub\"}}' > ./output_dir/serviceaccount-signer.public 1", "./ccoctl azure create-oidc-issuer --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --tenant-id <azure_tenant_id> --public-key-file ./output_dir/serviceaccount-signer.public 4", "ll ./output_dir/manifests", "total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml", "OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`", "oc patch authentication cluster --type=merge -p \"{\\\"spec\\\":{\\\"serviceAccountIssuer\\\":\\\"USD{OIDC_ISSUER_URL}\\\"}}\"", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc patch cloudcredential cluster --type=merge --patch '{\"spec\":{\"credentialsMode\":\"Manual\"}}'", "oc adm release extract --credentials-requests --included --to <path_to_directory_for_credentials_requests> --registry-config ~/.pull-secret", "AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`", "ccoctl azure create-managed-identities --name <azure_infra_name> --output-dir ./output_dir --region <azure_region> --subscription-id <azure_subscription_id> --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 1 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\"", "oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml", "find ./output_dir/manifests -iname \"openshift*yaml\" -print0 | xargs -I {} -0 -t oc replace -f {}", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc delete secret -n kube-system azure-credentials", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/postinstallation_configuration/index
19.6. Virtualization
19.6. Virtualization Virtualization Getting Started Guide The Virtualization Getting Started Guide is an introduction to virtualization on Red Hat Enterprise Linux 7. Virtualization Deployment and Administration Guide The Virtualization Deployment and Administration Guide provides information on installing, configuring, and managing virtualization on Red Hat Enterprise Linux 7. Virtualization Security Guide The Virtualization Security Guide provides an overview of the virtualization security technologies available from Red Hat, along with recommendations for securing virtualization hosts, guests, and shared infrastructure and resources in virtualized environments. Virtualization Tuning and Optimization Guide The Virtualization Tuning and Optimization Guide covers KVM and virtualization performance, with tips and suggestions for making full use of KVM performance features and options for your host systems and virtualized guests.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-virtualization
Securing Red Hat Quay
Securing Red Hat Quay Red Hat Quay 3 Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/securing_red_hat_quay/index
Chapter 6. Understanding identity provider configuration
Chapter 6. Understanding identity provider configuration The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to specify an identity provider after you install your cluster. 6.1. About identity providers in OpenShift Container Platform By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a custom resource (CR) that describes that identity provider and add it to the cluster. Note OpenShift Container Platform user names containing / , : , and % are not supported. 6.2. Supported identity providers You can configure the following types of identity providers: Identity provider Description htpasswd Configure the htpasswd identity provider to validate user names and passwords against a flat file generated using htpasswd . Keystone Configure the keystone identity provider to integrate your OpenShift Container Platform cluster with Keystone to enable shared authentication with an OpenStack Keystone v3 server configured to store users in an internal database. LDAP Configure the ldap identity provider to validate user names and passwords against an LDAPv3 server, using simple bind authentication. Basic authentication Configure a basic-authentication identity provider for users to log in to OpenShift Container Platform with credentials validated against a remote identity provider. Basic authentication is a generic backend integration mechanism. Request header Configure a request-header identity provider to identify users from request header values, such as X-Remote-User . It is typically used in combination with an authenticating proxy, which sets the request header value. GitHub or GitHub Enterprise Configure a github identity provider to validate user names and passwords against GitHub or GitHub Enterprise's OAuth authentication server. GitLab Configure a gitlab identity provider to use GitLab.com or any other GitLab instance as an identity provider. Google Configure a google identity provider using Google's OpenID Connect integration . OpenID Connect Configure an oidc identity provider to integrate with an OpenID Connect identity provider using an Authorization Code Flow . Once an identity provider has been defined, you can use RBAC to define and apply permissions . 6.3. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system 6.4. Identity provider parameters The following parameters are common to all identity providers: Parameter Description name The provider name is prefixed to provider user names to form an identity name. mappingMethod Defines how new identities are mapped to users when they log in. Enter one of the following values: claim The default value. Provisions a user with the identity's preferred user name. Fails if a user with that user name is already mapped to another identity. 
lookup Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. add Provisions a user with the identity's preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names. Note When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add . 6.5. Sample identity provider CR The following custom resource (CR) shows the parameters and default values that you use to configure an identity provider. This example uses the htpasswd identity provider. Sample identity provider CR apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3 1 This provider name is prefixed to provider user names to form an identity name. 2 Controls how mappings are established between this provider's identities and User objects. 3 An existing secret containing a file generated using htpasswd . 6.6. Manually provisioning a user when using the lookup mapping method Typically, identities are automatically mapped to users during login. The lookup mapping method disables this automatic mapping, which requires you to provision users manually. If you are using the lookup mapping method, use the following procedure for each user after configuring the identity provider. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create an OpenShift Container Platform user: USD oc create user <username> Create an OpenShift Container Platform identity: USD oc create identity <identity_provider>:<identity_provider_user_id> Where <identity_provider_user_id> is a name that uniquely represents the user in the identity provider. Create a user identity mapping for the created user and identity: USD oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username> Additional resources How to create user, identity and map user and identity in LDAP authentication for mappingMethod as lookup inside the OAuth manifest How to create user, identity and map user and identity in OIDC authentication for mappingMethod as lookup
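The sample htpasswd CR above references an existing secret containing the htpasswd file. As a minimal sketch of creating that secret, assuming a hypothetical users.htpasswd file and user1 user name (placeholders, not values from this chapter), with the secret created in the openshift-config namespace where the OAuth configuration expects it: htpasswd -c -B -b users.htpasswd user1 <password>
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config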
[ "oc delete secrets kubeadmin -n kube-system", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc create user <username>", "oc create identity <identity_provider>:<identity_provider_user_id>", "oc create useridentitymapping <identity_provider>:<identity_provider_user_id> <username>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/understanding-identity-provider
4.2.5. Monitoring Reads and Writes to a File
4.2.5. Monitoring Reads and Writes to a File This section describes how to monitor reads from and writes to a file in real time. inodewatch.stp inodewatch.stp takes the following information about the file as arguments on the command line: The file's major device number. The file's minor device number. The file's inode number. To get this information, use stat -c '%D %i' filename , where filename is an absolute path. For instance: if you wish to monitor /etc/crontab , run stat -c '%D %i' /etc/crontab first. This gives the following output: 805 is the base-16 (hexadecimal) device number. The lower two digits are the minor device number and the upper digits are the major number. 1078319 is the inode number. To start monitoring /etc/crontab , run stap inodewatch.stp 0x8 0x05 1078319 (The 0x prefixes indicate base-16 values). The output of this command contains the name and ID of any process performing a read/write, the function it is performing (that is vfs_read or vfs_write ), the device number (in hex format), and the inode number. Example 4.9, "inodewatch.stp Sample Output" contains the output of stap inodewatch.stp 0x8 0x05 1078319 (when cat /etc/crontab is executed while the script is running) : Example 4.9. inodewatch.stp Sample Output
[ "#! /usr/bin/env stap probe vfs.write, vfs.read { # dev and ino are defined by vfs.write and vfs.read if (dev == MKDEV(USD1,USD2) # major/minor device && ino == USD3) printf (\"%s(%d) %s 0x%x/%u\\n\", execname(), pid(), probefunc(), dev, ino) }", "805 1078319", "cat(16437) vfs_read 0x800005/1078319 cat(16437) vfs_read 0x800005/1078319" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/inodewatchsect
Preface
Preface After installing Red Hat Ansible Automation Platform, your system might need extra configuration to ensure your deployment runs smoothly. This guide provides procedures for configuration tasks that you can perform after installing Red Hat Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/pr01
Chapter 8. Setting up metrics and dashboards for AMQ Streams
Chapter 8. Setting up metrics and dashboards for AMQ Streams You can use Prometheus and Grafana to monitor your AMQ Streams deployment. You can monitor your AMQ Streams deployment by viewing key metrics on dashboards and setting up alerts that trigger under certain conditions. Metrics are available for each of the components of AMQ Streams. To provide metrics information, AMQ Streams uses Prometheus rules and Grafana dashboards. When configured with a set of rules for each component of AMQ Streams, Prometheus consumes key metrics from the pods that are running in your cluster. Grafana then visualizes those metrics on dashboards. AMQ Streams includes example Grafana dashboards that you can customize to suit your deployment. AMQ Streams employs monitoring for user-defined projects (an OpenShift feature) to simplify the Prometheus setup process. Depending on your requirements, you can: Set up and deploy Prometheus to expose metrics Deploy Kafka Exporter to provide additional metrics Use Grafana to present the Prometheus metrics With Prometheus and Grafana set up, you can use the example Grafana dashboards provided by AMQ Streams for monitoring. Additionally, you can configure your deployment to track messages end-to-end by setting up distributed tracing . Note AMQ Streams provides example installation files for Prometheus and Grafana. You can use these files as a starting point when trying out monitoring of AMQ Streams. For further support, try engaging with the Prometheus and Grafana developer communities. Supporting documentation for metrics and monitoring tools For more information on the metrics and monitoring tools, refer to the supporting documentation: Prometheus Prometheus configuration Kafka Exporter Grafana Labs Apache Kafka Monitoring describes JMX metrics exposed by Apache Kafka ZooKeeper JMX describes JMX metrics exposed by Apache ZooKeeper 8.1. Monitoring consumer lag with Kafka Exporter Kafka Exporter is an open source project to enhance monitoring of Apache Kafka brokers and clients. You can configure the Kafka resource to deploy Kafka Exporter with your Kafka cluster . Kafka Exporter extracts additional metrics data from Kafka brokers related to offsets, consumer groups, consumer lag, and topics. The metrics data is used, for example, to help identify slow consumers. Lag data is exposed as Prometheus metrics, which can then be presented in Grafana for analysis. Important Kafka Exporter provides only additional metrics related to consumer lag and consumer offsets. For regular Kafka metrics, you have to configure the Prometheus metrics in Kafka brokers . Consumer lag indicates the difference in the rate of production and consumption of messages. Specifically, consumer lag for a given consumer group indicates the delay between the last message in the partition and the message being currently picked up by that consumer. The lag reflects the position of the consumer offset in relation to the end of the partition log. Consumer lag between the producer and consumer offset This difference is sometimes referred to as the delta between the producer offset and consumer offset: the read and write positions in the Kafka broker topic partitions. Suppose a topic streams 100 messages a second. A lag of 1000 messages between the producer offset (the topic partition head) and the last offset the consumer has read means a 10-second delay. 
The importance of monitoring consumer lag For applications that rely on the processing of (near) real-time data, it is critical to monitor consumer lag to check that it does not become too big. The greater the lag becomes, the further the process moves from the real-time processing objective. Consumer lag, for example, might be a result of consuming too much old data that has not been purged, or through unplanned shutdowns. Reducing consumer lag Use the Grafana charts to analyze lag and to check if actions to reduce lag are having an impact on an affected consumer group. If, for example, Kafka brokers are adjusted to reduce lag, the dashboard will show the Lag by consumer group chart going down and the Messages consumed per minute chart going up. Typical actions to reduce lag include: Scaling-up consumer groups by adding new consumers Increasing the retention time for a message to remain in a topic Adding more disk capacity to increase the message buffer Actions to reduce consumer lag depend on the underlying infrastructure and the use cases AMQ Streams is supporting. For instance, a lagging consumer is less likely to benefit from the broker being able to service a fetch request from its disk cache. And in certain cases, it might be acceptable to automatically drop messages until a consumer has caught up. 8.2. Monitoring Cruise Control operations Cruise Control monitors Kafka brokers in order to track the utilization of brokers, topics, and partitions. Cruise Control also provides a set of metrics for monitoring its own performance. The Cruise Control metrics reporter collects raw metrics data from Kafka brokers. The data is produced to topics that are automatically created by Cruise Control. The metrics are used to generate optimization proposals for Kafka clusters . Cruise Control metrics are available for real-time monitoring of Cruise Control operations. For example, you can use Cruise Control metrics to monitor the status of rebalancing operations that are running or provide alerts on any anomalies that are detected in an operation's performance. You expose Cruise Control metrics by enabling the Prometheus JMX Exporter in the Cruise Control configuration. Note For a full list of available Cruise Control metrics, which are known as sensors , see the Cruise Control documentation . 8.2.1. Exposing Cruise Control metrics If you want to expose metrics on Cruise Control operations, configure the Kafka resource to deploy Cruise Control and enable Prometheus metrics in the deployment . You can use your own configuration or use the example kafka-cruise-control-metrics.yaml file provided by AMQ Streams. You add the configuration to the metricsConfig of the CruiseControl property in the Kafka resource. The configuration enables the Prometheus JMX Exporter to expose Cruise Control metrics through an HTTP endpoint. The HTTP endpoint is scraped by the Prometheus server. Example metrics configuration for Cruise Control apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster Spec: # ... cruiseControl: # ... metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml --- kind: ConfigMap apiVersion: v1 metadata: name: cruise-control-metrics labels: app: strimzi data: metrics-config.yml: | # metrics configuration... 8.2.2. Viewing Cruise Control metrics After you expose the Cruise Control metrics, you can use Prometheus or another suitable monitoring system to view information on the metrics data. 
AMQ Streams provides an example Grafana dashboard to display visualizations of Cruise Control metrics. The dashboard is a JSON file called strimzi-cruise-control.json . The exposed metrics provide the monitoring data when you enable the Grafana dashboard . 8.2.2.1. Monitoring balancedness scores Cruise Control metrics include a balancedness score. Balancedness is the measure of how evenly a workload is distributed in a Kafka cluster. The Cruise Control metric for balancedness score ( balancedness-score ) might differ from the balancedness score in the KafkaRebalance resource. Cruise Control calculates each score using anomaly.detection.goals which might not be the same as the default.goals used in the KafkaRebalance resource. The anomaly.detection.goals are specified in the spec.cruiseControl.config of the Kafka custom resource. Note Refreshing the KafkaRebalance resource fetches an optimization proposal. The latest cached optimization proposal is fetched if one of the following conditions applies: KafkaRebalance goals match the goals configured in the default.goals section of the Kafka resource KafkaRebalance goals are not specified Otherwise, Cruise Control generates a new optimization proposal based on KafkaRebalance goals . If new proposals are generated with each refresh, this can impact performance monitoring. 8.2.2.2. Alerts on anomaly detection Cruise control's anomaly detector provides metrics data for conditions that block the generation of optimization goals, such as broker failures. If you want more visibility, you can use the metrics provided by the anomaly detector to set up alerts and send out notifications. You can set up Cruise Control's anomaly notifier to route alerts based on these metrics through a specified notification channel. Alternatively, you can set up Prometheus to scrape the metrics data provided by the anomaly detector and generate alerts. Prometheus Alertmanager can then route the alerts generated by Prometheus. The Cruise Control documentation provides information on AnomalyDetector metrics and the anomaly notifier. 8.3. Example metrics files You can find example Grafana dashboards and other metrics configuration files in the example configuration files provided by AMQ Streams. Example metrics files provided with AMQ Streams 1 Example Grafana dashboards for the different AMQ Streams components. 2 Installation file for the Grafana image. 3 Additional configuration to scrape metrics for CPU, memory and disk volume usage, which comes directly from the OpenShift cAdvisor agent and kubelet on the nodes. 4 Hook definitions for sending notifications through Alertmanager. 5 Resources for deploying and configuring Alertmanager. 6 Alerting rules examples for use with Prometheus Alertmanager (deployed with Prometheus). 7 Installation resource file for the Prometheus image. 8 PodMonitor definitions translated by the Prometheus Operator into jobs for the Prometheus server to be able to scrape metrics data directly from pods. 9 Kafka Bridge resource with metrics enabled. 10 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Connect. 11 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Cruise Control. 12 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka and ZooKeeper. 13 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Mirror Maker 2.0. 8.3.1. 
Example Prometheus metrics configuration AMQ Streams uses the Prometheus JMX Exporter to expose metrics through an HTTP endpoint, which can be scraped by the Prometheus server. Grafana dashboards are dependent on Prometheus JMX Exporter relabeling rules, which are defined for AMQ Streams components in the custom resource configuration. A label is a name-value pair. Relabeling is the process of writing a label dynamically. For example, the value of a label may be derived from the name of a Kafka server and client ID. AMQ Streams provides example custom resource configuration YAML files with relabeling rules. When deploying Prometheus metrics configuration, you can deploy the example custom resource or copy the metrics configuration to your own custom resource definition. Table 8.1. Example custom resources with metrics configuration Component Custom resource Example YAML file Kafka and ZooKeeper Kafka kafka-metrics.yaml Kafka Connect KafkaConnect kafka-connect-metrics.yaml Kafka MirrorMaker 2.0 KafkaMirrorMaker2 kafka-mirror-maker-2-metrics.yaml Kafka Bridge KafkaBridge kafka-bridge-metrics.yaml Cruise Control Kafka kafka-cruise-control-metrics.yaml 8.3.2. Example Prometheus rules for alert notifications Example Prometheus rules for alert notifications are included with the example metrics configuration files provided by AMQ Streams. The rules are specified in the example prometheus-rules.yaml file for use in a Prometheus deployment . Alerting rules provide notifications about specific conditions observed in metrics. Rules are declared on the Prometheus server, but Prometheus Alertmanager is responsible for alert notifications. Prometheus alerting rules describe conditions using PromQL expressions that are continuously evaluated. When an alert expression becomes true, the condition is met and the Prometheus server sends alert data to the Alertmanager. Alertmanager then sends out a notification using the communication method configured for its deployment. General points about the alerting rule definitions: A for property is used with the rules to determine the period of time a condition must persist before an alert is triggered. A tick is a basic ZooKeeper time unit, which is measured in milliseconds and configured using the tickTime parameter of Kafka.spec.zookeeper.config . For example, if ZooKeeper tickTime=3000 , 3 ticks (3 x 3000) equals 9000 milliseconds. The availability of the ZookeeperRunningOutOfSpace metric and alert is dependent on the OpenShift configuration and storage implementation used. Storage implementations for certain platforms may not be able to supply the information on available space required for the metric to provide an alert. Alertmanager can be configured to use email, chat messages, or other notification methods. Adapt the default configuration of the example rules according to your specific needs. 8.3.2.1. Example alerting rules The prometheus-rules.yaml file contains example rules for the following components: Kafka ZooKeeper Entity Operator Kafka Connect Kafka Bridge MirrorMaker Kafka Exporter A description of each of the example rules is provided in the file. 8.3.3. Example Grafana dashboards If you deploy Prometheus to provide metrics, you can use the example Grafana dashboards provided with AMQ Streams to monitor AMQ Streams components. Example dashboards are provided in the examples/metrics/grafana-dashboards directory as JSON files. All dashboards provide JVM metrics, as well as metrics specific to the component.
For example, the Grafana dashboard for AMQ Streams operators provides information on the number of reconciliations or custom resources they are processing. The example dashboards don't show all the metrics supported by Kafka. The dashboards are populated with a representative set of metrics for monitoring. Table 8.2. Example Grafana dashboard files Component Example JSON file AMQ Streams operators strimzi-operators.json Kafka strimzi-kafka.json ZooKeeper strimzi-zookeeper.json Kafka Connect strimzi-kafka-connect.json Kafka MirrorMaker 2.0 strimzi-kafka-mirror-maker-2.json Kafka Bridge strimzi-kafka-bridge.json Cruise Control strimzi-cruise-control.json Kafka Exporter strimzi-kafka-exporter.json 8.4. Deploying Prometheus metrics configuration Deploy Prometheus metrics configuration to use Prometheus with AMQ Streams. Use the metricsConfig property to enable and configure Prometheus metrics. You can use your own configuration or the example custom resource configuration files provided with AMQ Streams . kafka-metrics.yaml kafka-connect-metrics.yaml kafka-mirror-maker-2-metrics.yaml kafka-bridge-metrics.yaml kafka-cruise-control-metrics.yaml The example configuration files have relabeling rules and the configuration required to enable Prometheus metrics. Prometheus scrapes metrics from target HTTP endpoints. The example files are a good way to try Prometheus with AMQ Streams. To apply the relabeling rules and metrics configuration, do one of the following: Copy the example configuration to your own custom resources Deploy the custom resource with the metrics configuration If you want to include Kafka Exporter metrics, add kafkaExporter configuration to your Kafka resource. Important Kafka Exporter provides only additional metrics related to consumer lag and consumer offsets. For regular Kafka metrics, you have to configure the Prometheus metrics in Kafka brokers . This procedure shows how to deploy Prometheus metrics configuration in the Kafka resource. The process is the same when using the example files for other resources. Procedure Deploy the example custom resource with the Prometheus configuration. For example, for each Kafka resource you apply the kafka-metrics.yaml file. Deploying the example configuration oc apply -f kafka-metrics.yaml Alternatively, you can copy the example configuration in kafka-metrics.yaml to your own Kafka resource. Copying the example configuration oc edit kafka <kafka-configuration-file> Copy the metricsConfig property and the ConfigMap it references to your Kafka resource. Example metrics configuration for Kafka apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metricsConfig: 1 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key --- kind: ConfigMap 2 apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yml: | # metrics configuration... 1 Copy the metricsConfig property that references the ConfigMap that contains metrics configuration. 2 Copy the whole ConfigMap that specifies the metrics configuration. Note For Kafka Bridge, you specify the enableMetrics property and set it to true . apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... bootstrapServers: my-cluster-kafka:9092 http: # ... enableMetrics: true # ... To deploy Kafka Exporter, add kafkaExporter configuration. kafkaExporter configuration is only specified in the Kafka resource. 
Example configuration for deploying Kafka Exporter apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest 1 groupRegex: ".*" 2 topicRegex: ".*" 3 resources: 4 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 5 enableSaramaLogging: true 6 template: 7 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 8 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 # ... 1 ADVANCED OPTION: Container image configuration, which is recommended only in special situations . 2 A regular expression to specify the consumer groups to include in the metrics. 3 A regular expression to specify the topics to include in the metrics. 4 CPU and memory resources to reserve . 5 Logging configuration, to log messages with a given severity (debug, info, warn, error, fatal) or above. 6 Boolean to enable Sarama logging, a Go client library used by Kafka Exporter. 7 Customization of deployment templates and pods . 8 Healthcheck readiness probes . 9 Healthcheck liveness probes . Additional resources KafkaExporterTemplate schema reference metricsConfig schema reference 8.5. Viewing Kafka metrics and dashboards in OpenShift When AMQ Streams is deployed to OpenShift Container Platform, metrics are provided through monitoring for user-defined projects . This OpenShift feature gives developers access to a separate Prometheus instance for monitoring their own projects (for example, a Kafka project). If monitoring for user-defined projects is enabled, the openshift-user-workload-monitoring project contains the following components: A Prometheus Operator A Prometheus instance (automatically deployed by the Prometheus Operator) A Thanos Ruler instance AMQ Streams uses these components to consume metrics. A cluster administrator must enable monitoring for user-defined projects and then grant developers and other users permission to monitor applications within their own projects. Grafana deployment You can deploy a Grafana instance to the project containing your Kafka cluster. The example Grafana dashboards can then be used to visualize Prometheus metrics for AMQ Streams in the Grafana user interface. Important The openshift-monitoring project provides monitoring for core platform components. Do not use the Prometheus and Grafana components in this project to configure monitoring for AMQ Streams on OpenShift Container Platform 4.x. Procedure outline To set up AMQ Streams monitoring in OpenShift Container Platform, follow these procedures in order: Prerequisite: Deploy the Prometheus metrics configuration Deploy the Prometheus resources Create a service account for Grafana Deploy Grafana with a Prometheus datasource Create a Route to the Grafana Service Import the example Grafana dashboards 8.5.1. Prerequisites You have deployed the Prometheus metrics configuration using the example YAML files. Monitoring for user-defined projects is enabled. A cluster administrator has created a cluster-monitoring-config config map in your OpenShift cluster. A cluster administrator has assigned you a monitoring-rules-edit or monitoring-edit role. For more information on creating a cluster-monitoring-config config map and granting users permission to monitor user-defined projects, see OpenShift Container Platform Monitoring . 8.5.2. 
Additional resources OpenShift Container Platform Monitoring 8.5.3. Deploying the Prometheus resources Use Prometheus to obtain monitoring data in your Kafka cluster. You can use your own Prometheus deployment or deploy Prometheus using the example metrics configuration files provided by AMQ Streams. To use the example files, you configure and deploy the PodMonitor resources. The PodMonitors scrape data directly from pods for Apache Kafka, ZooKeeper, Operators, the Kafka Bridge, and Cruise Control. Then, you deploy the example alerting rules for Alertmanager. Prerequisites A running Kafka cluster. Check the example alerting rules provided with AMQ Streams. Procedure Check that monitoring for user-defined projects is enabled: oc get pods -n openshift-user-workload-monitoring If enabled, pods for the monitoring components are returned. For example: NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s If no pods are returned, monitoring for user-defined projects is disabled. See the Prerequisites in Section 8.5, "Viewing Kafka metrics and dashboards in OpenShift" . Multiple PodMonitor resources are defined in examples/metrics/prometheus-install/strimzi-pod-monitor.yaml . For each PodMonitor resource, edit the spec.namespaceSelector.matchNames property: apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - <project-name> 1 podMetricsEndpoints: - path: /metrics port: http # ... 1 The project where the pods to scrape the metrics from are running, for example, Kafka . Deploy the strimzi-pod-monitor.yaml file to the project where your Kafka cluster is running: oc apply -f strimzi-pod-monitor.yaml -n MY-PROJECT Deploy the example Prometheus rules to the same project: oc apply -f prometheus-rules.yaml -n MY-PROJECT 8.5.4. Creating a service account for Grafana A Grafana instance for AMQ Streams needs to run with a service account that is assigned the cluster-monitoring-view role. Create a service account if you are using Grafana to present metrics for monitoring. Prerequisites Deploy the Prometheus resources Procedure Create a ServiceAccount for Grafana. Here the resource is named grafana-serviceaccount . apiVersion: v1 kind: ServiceAccount metadata: name: grafana-serviceaccount labels: app: strimzi Deploy the ServiceAccount to the project containing your Kafka cluster: oc apply -f GRAFANA-SERVICEACCOUNT -n MY-PROJECT Create a ClusterRoleBinding resource that assigns the cluster-monitoring-view role to the Grafana ServiceAccount . Here the resource is named grafana-cluster-monitoring-binding . apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-serviceaccount namespace: <my-project> 1 roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io 1 Name of your project. Deploy the ClusterRoleBinding to the project containing your Kafka cluster: oc apply -f <grafana-cluster-monitoring-binding> -n <my-project> 8.5.5. Deploying Grafana with a Prometheus datasource Deploy Grafana to present Prometheus metrics. 
A Grafana application requires configuration for the OpenShift Container Platform monitoring stack. OpenShift Container Platform includes a Thanos Querier instance in the openshift-monitoring project. Thanos Querier is used to aggregate platform metrics. To consume the required platform metrics, your Grafana instance requires a Prometheus data source that can connect to Thanos Querier. To configure this connection, you create a config map that authenticates, by using a token, to the oauth-proxy sidecar that runs alongside Thanos Querier. A datasource.yaml file is used as the source of the config map. Finally, you deploy the Grafana application with the config map mounted as a volume to the project containing your Kafka cluster. Prerequisites Deploy the Prometheus resources Create a service account for Grafana Procedure Get the access token of the Grafana ServiceAccount : oc serviceaccounts get-token grafana-serviceaccount -n MY-PROJECT Copy the access token to use in the step. Create a datasource.yaml file containing the Thanos Querier configuration for Grafana. Paste the access token into the httpHeaderValue1 property as indicated. apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: "Authorization" secureJsonData: httpHeaderValue1: "Bearer USD{ GRAFANA-ACCESS-TOKEN }" 1 editable: true 1 GRAFANA-ACCESS-TOKEN : The value of the access token for the Grafana ServiceAccount . Create a config map named grafana-config from the datasource.yaml file: oc create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT Create a Grafana application consisting of a Deployment and a Service . The grafana-config config map is mounted as a volume for the datasource configuration. apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-serviceaccount containers: - name: grafana image: grafana/grafana:7.5.15 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP Deploy the Grafana application to the project containing your Kafka cluster: oc apply -f <grafana-application> -n <my-project> 8.5.6. Creating a route to the Grafana Service You can access the Grafana user interface through a Route that exposes the Grafana service. 
Prerequisites Deploy the Prometheus resources Create a service account for Grafana Deploy Grafana with a Prometheus datasource Procedure Create an edge route to the grafana service: oc create route edge <my-grafana-route> --service=grafana --namespace= KAFKA-NAMESPACE 8.5.7. Importing the example Grafana dashboards Use Grafana to provide visualizations of Prometheus metrics on customizable dashboards. AMQ Streams provides example dashboard configuration files for Grafana in JSON format. examples/metrics/grafana-dashboards This procedure uses the example Grafana dashboards. The example dashboards are a good starting point for monitoring key metrics, but they don't show all the metrics supported by Kafka. You can modify the example dashboards or add other metrics, depending on your infrastructure. Prerequisites Deploy the Prometheus resources Create a service account for Grafana Deploy Grafana with a Prometheus datasource Create a Route to the Grafana Service Procedure Get the details of the Route to the Grafana Service. For example: oc get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana In a web browser, access the Grafana login screen using the URL for the Route host and port. Enter your user name and password, and then click Log In . The default Grafana user name and password are both admin . After logging in for the first time, you can change the password. In Configuration > Data Sources , check that the Prometheus data source was created. The data source was created in Section 8.5.5, "Deploying Grafana with a Prometheus datasource" . Click the + icon and then click Import . In examples/metrics/grafana-dashboards , copy the JSON of the dashboard to import. Paste the JSON into the text box, and then click Load . Repeat steps 5-7 for the other example Grafana dashboards. The imported Grafana dashboards are available to view from the Dashboards home page.
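As an optional check after importing the dashboards, confirm that the route and the Grafana health endpoint respond. A minimal sketch, assuming the route and project names used earlier in this procedure:
oc get route <my-grafana-route> -n <my-project>
curl -k https://<route_host>/api/health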
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster Spec: # cruiseControl: # metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: cruise-control-metrics key: metrics-config.yml --- kind: ConfigMap apiVersion: v1 metadata: name: cruise-control-metrics labels: app: strimzi data: metrics-config.yml: | # metrics configuration", "metrics ├── grafana-dashboards 1 │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json │ ├── strimzi-kafka.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml 2 ├── prometheus-additional-properties │ └── prometheus-additional.yaml 3 ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml 4 ├── prometheus-install │ ├── alert-manager.yaml 5 │ ├── prometheus-rules.yaml 6 │ ├── prometheus.yaml 7 │ ├── strimzi-pod-monitor.yaml 8 ├── kafka-bridge-metrics.yaml 9 ├── kafka-connect-metrics.yaml 10 ├── kafka-cruise-control-metrics.yaml 11 ├── kafka-metrics.yaml 12 └── kafka-mirror-maker-2-metrics.yaml 13", "apply -f kafka-metrics.yaml", "edit kafka <kafka-configuration-file>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: 1 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key --- kind: ConfigMap 2 apiVersion: v1 metadata: name: kafka-metrics labels: app: strimzi data: kafka-metrics-config.yml: | # metrics configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: # enableMetrics: true #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-registry.io/my-org/my-exporter-cluster:latest 1 groupRegex: \".*\" 2 topicRegex: \".*\" 3 resources: 4 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 5 enableSaramaLogging: true 6 template: 7 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 8 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5", "get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - <project-name> 1 podMetricsEndpoints: - path: /metrics port: http", "apply -f strimzi-pod-monitor.yaml -n MY-PROJECT", "apply -f prometheus-rules.yaml -n MY-PROJECT", "apiVersion: v1 kind: ServiceAccount metadata: name: grafana-serviceaccount labels: app: strimzi", "apply -f GRAFANA-SERVICEACCOUNT -n MY-PROJECT", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-serviceaccount namespace: <my-project> 1 roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io", "apply -f 
<grafana-cluster-monitoring-binding> -n <my-project>", "serviceaccounts get-token grafana-serviceaccount -n MY-PROJECT", "apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: \"Authorization\" secureJsonData: httpHeaderValue1: \"Bearer USD{ GRAFANA-ACCESS-TOKEN }\" 1 editable: true", "create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT", "apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-serviceaccount containers: - name: grafana image: grafana/grafana:7.5.15 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP", "apply -f <grafana-application> -n <my-project>", "create route edge <my-grafana-route> --service=grafana --namespace= KAFKA-NAMESPACE", "get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-metrics-setup-str
Chapter 7. Building and provisioning a minimal raw image
Chapter 7. Building and provisioning a minimal raw image The minimal-raw image is a pre-packaged, bootable, minimal RPM image, compressed in the xz format. The image consists of a file containing a partition layout with an existing deployed OSTree commit in it. You can build a RHEL for Edge Minimal Raw image type by using RHEL image builder and deploy the Minimal Raw image to the aarch64 and x86 architectures. 7.1. The minimal raw image build and deployment Build a RHEL for Edge Minimal Raw image by using the minimal-raw image type. To boot the image, you must decompress it and copy to any bootable device, such as an SD card or a USB flash drive. You can log in to the deployed system with the user name and password that you specified in the blueprint that you used to create the RHEL for Edge Minimal Raw image. Composing and deploying a RHEL for Edge Minimal Raw image involves the following high-level steps: Install and register a RHEL system Install RHEL image builder Using RHEL image builder, create a blueprint with your customizations for RHEL for Edge Minimal Raw image Import the RHEL for Edge blueprint in RHEL image builder Create a RHEL for Edge Minimal Raw image Download and decompress the RHEL for Edge Minimal Raw image Create a bootable USB drive from the decompressed Raw image Deploy the RHEL for Edge Minimal Raw image 7.2. Creating the blueprint for a Minimal Raw image by using RHEL image builder CLI Create a blueprint, and customize it with a username and a password. You can use the resulting blueprint to create a Minimal Raw image and log in to it by using the credentials that you configured in the blueprint. Procedure Create a plain text file in the Tom's Obvious, Minimal Language (TOML) format, with the following content: name is the name and description is the description for your blueprint. 0.0.1 is the version number according to the Semantic Versioning scheme. Modules describe the package name and matching version glob to be installed into the image, for example, the package name = "tmux" and the matching version glob is version = "2.9a". Currently there are no differences between packages and modules. Groups are packages groups to be installed into the image, for example the anaconda-tools group package. If you do not know the modules and groups, leave them empty. Under customizations.user : name is the username to login to the image password is a password of your choice groups are any user groups, such as "widget" Import the blueprint to the RHEL image builder server: Check if the blueprint is available on the system: Check the validity of components, versions, and their dependencies in the blueprint: Additional resources Composing a RHEL for Edge image by using RHEL image builder command-line 7.3. Creating a Minimal Raw image by using RHEL image builder CLI Create a RHEL for Edge Minimal Raw image with the RHEL image builder command-line interface. Prerequisites You created a blueprint for the RHEL for Edge Minimal Raw image. Procedure Build the image. <blueprint_name> is the RHEL for Edge blueprint name minimal-raw is the image type Check the image compose status. The output displays the status in the following format: Additional resources Composing a RHEL for Edge image using image builder command-line 7.4. Downloading and decompressing the Minimal Raw image Download the RHEL for Edge Minimal Raw image by using RHEL image builder command-line interface, and then decompress the image to be able to boot it. 
Prerequisites You have created a RHEL for Edge Minimal Raw image. Procedure Review the RHEL for Edge Minimal Raw image compose status. The output must display the following details: Download the image: Image builder downloads the image into your working directory. The following output is an example: Decompress the image: Use the decompressed bootable RHEL for Edge Minimal Raw image to create a bootable installation medium and use it as a boot device. The following documentation describes the procedure of creating a bootable USB device from an ISO image. However, the same steps apply to the RAW images, because the RAW image is equivalent to the ISO image. See Creating a bootable USB device on Linux for more details. 7.5. Deploying the Minimal Raw image from a USB flash drive After you have created a bootable USB installation medium from the customized RHEL for Edge Minimal Raw image, you can continue the installation process by deploying the Minimal Raw image from the USB flash drive and booting your customized image. Prerequisites You have an 8 GB USB flash drive. You have created a bootable installation medium from the RHEL for Edge Minimal Raw image on the USB drive. Procedure Connect the USB flash drive to the computer where you want to boot your customized image. Power on the system. Boot the RHEL for Edge Minimal Raw image from the USB flash drive. The boot menu shows you the following options: Choose Install Red Hat Enterprise Linux 9 . This starts the system installation. Verification Boot into the image by using the username and password you configured in the blueprint. Check the release: List the block devices in the system: 7.6. Serving a RHEL for Edge Container image to build a RHEL for Edge Raw image Create a RHEL for Edge Container image and run it as a container that serves the OSTree commit used to build a RHEL for Edge Raw image. Prerequisites You have created a RHEL for Edge Minimal Raw image and downloaded it. Procedure Create a blueprint for the rhel-edge-container image type, for example: Build a rhel-edge-container image: Check if the image is ready: Download the rhel-edge-container image as a .tar file: Import the RHEL for Edge Container into Podman: Start the container and make it available on port 8080: Create a blueprint for the edge-raw-image image type, for example: Build a RHEL for Edge Raw image by using the repository served by the running RHEL for Edge Container: Download the RHEL for Edge Raw image as a .raw file: Decompress the RHEL for Edge Raw image:
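After decompression, you can write the resulting raw file to a USB flash drive in the same way as the Minimal Raw image. A minimal sketch, assuming a hypothetical target device /dev/sdX and a decompressed file named <UUID>-image.raw (dd overwrites the target device, so verify the device name first):
dd if=<UUID>-image.raw of=/dev/sdX bs=4M status=progress conv=fsync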
[ "name = \"minimal-raw-blueprint\" description = \"blueprint for the Minimal Raw image\" version = \"0.0.1\" packages = [] modules = [] groups = [] distro = \"\" [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"users\", \"wheel\"]", "composer-cli blueprints push <blueprint_name> .toml", "composer-cli blueprints list", "composer-cli blueprints depsolve <blueprint_name>", "composer-cli compose start <blueprint_name> minimal-raw", "composer-cli compose status", "<UUID> RUNNING date <blueprint_name> blueprint-version minimal-raw", "composer-cli compose status", "<UUID> FINISHED date <blueprint_name> <blueprint_version> minimal-raw", "composer-cli compose image <UUID>", "3f9223c1-6ddb-4915-92fe-9e0869b8e209-raw.img.xz", "xz -d <UUID> -raw.img.xz", "Install Red Hat Enterprise Linux 9 Test this media & install Red Hat Enterprise Linux 9", "cat /etc/os-release", "lsblk", "name = \"rhel-edge-container-no-users\" description = \"\" version = \"0.0.1\"", "composer-cli compose start-ostree <rhel-edge-container-no-users> rhel-edge-container", "composer-cli compose status", "composer-cli compose image <UUID>", "skopeo copy oci-archive:_<UUID>_-container.tar containers-storage:localhost/rfe-93-mirror:latest", "podman run -d --rm --name <rfe-93-mirror> -p 8080:8080 localhost/ <rfe-93-mirror>", "name = \" <edge-raw> \" description = \"\" version = \"0.0.1\" [[customizations.user]] name = \"admin\" password = \"admin\" groups = [\"wheel\"]", "composer-cli compose start-ostree edge-raw edge-raw-image --url http://10.88.0.1:8080/repo", "composer-cli compose image <UUID>", "xz --decompress <UUID> >-image.raw.xz" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_installing_and_managing_rhel_for_edge_images/building-and-provisioning-a-minimal-raw-image_composing-installing-managing-rhel-for-edge-images
B.2. Fencing Configuration
B.2. Fencing Configuration You must configure a fencing device for each node in the cluster. For general information about configuring fencing devices, see Chapter 4, Fencing: Configuring STONITH . Note When configuring a fencing device, you should ensure that your fencing device does not share power with the node that it controls. This example uses the APC power switch with a host name of zapc.example.com to fence the nodes, and it uses the fence_apc_snmp fencing agent. Because both nodes will be fenced by the same fencing agent, you can configure both fencing devices as a single resource, using the pcmk_host_map and pcmk_host_list options. You create a fencing device by configuring the device as a stonith resource with the pcs stonith create command. The following command configures a stonith resource named myapc that uses the fence_apc_snmp fencing agent for nodes z1.example.com and z2.example.com . The pcmk_host_map option maps z1.example.com to port 1, and z2.example.com to port 2. The login value and password for the APC device are both apc . By default, this device will use a monitor interval of 60s for each node. Note that you can use an IP address when specifying the host name for the nodes. Note When you create a fence_apc_snmp stonith device, you may see the following warning message, which you can safely ignore: The following command displays the parameters of an existing STONITH device.
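After creating the fencing device, you can optionally confirm that the stonith resource has started and test fencing against one node. A minimal sketch, assuming your pcs version provides the fence subcommand (note that the second command reboots the target node, so run it only when that is acceptable):
pcs status
pcs stonith fence z2.example.com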
[ "pcs stonith create myapc fence_apc_snmp ipaddr=\"zapc.example.com\" pcmk_host_map=\"z1.example.com:1;z2.example.com:2\" pcmk_host_check=\"static-list\" pcmk_host_list=\"z1.example.com,z2.example.com\" login=\"apc\" passwd=\"apc\"", "Warning: missing required option(s): 'port, action' for resource type: stonith:fence_apc_snmp", "pcs stonith show myapc Resource: myapc (class=stonith type=fence_apc_snmp) Attributes: ipaddr=zapc.example.com pcmk_host_map=z1.example.com:1;z2.example.com:2 pcmk_host_check=static-list pcmk_host_list=z1.example.com,z2.example.com login=apc passwd=apc Operations: monitor interval=60s (myapc-monitor-interval-60s)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fenceconfig-HAAA
1.5. Listing Installed Software Collections
1.5. Listing Installed Software Collections To get a list of Software Collections that are installed on the system, run the following command: scl --list To get a list of installed packages contained within a specified Software Collection, run the following command: scl --list software_collection_1
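For example, assuming a hypothetical Software Collection named rh-python38 is installed on the system, the following command lists the packages it contains:
scl --list rh-python38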
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-Listing_Installed_Software_Collections
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.79.0_toolset/making-open-source-more-inclusive
Chapter 5. RuntimeClass [node.k8s.io/v1]
Chapter 5. RuntimeClass [node.k8s.io/v1] Description RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://kubernetes.io/docs/concepts/containers/runtime-class/ Type object Required handler 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources handler string handler specifies the underlying runtime and configuration that the CRI implementation will use to handle pods of this class. The possible values are specific to the node & CRI configuration. It is assumed that all handlers are available on every node, and handlers of the same name are equivalent on every node. For example, a handler called "runc" might specify that the runc OCI runtime (using native Linux containers) will be used to run the containers in a pod. The Handler must be lowercase, conform to the DNS Label (RFC 1123) requirements, and is immutable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata overhead object Overhead structure represents the resource overhead associated with running a pod. scheduling object Scheduling specifies the scheduling constraints for nodes supporting a RuntimeClass. 5.1.1. .overhead Description Overhead structure represents the resource overhead associated with running a pod. Type object Property Type Description podFixed object (Quantity) podFixed represents the fixed resource overhead associated with running a pod. 5.1.2. .scheduling Description Scheduling specifies the scheduling constraints for nodes supporting a RuntimeClass. Type object Property Type Description nodeSelector object (string) nodeSelector lists labels that must be present on nodes that support this RuntimeClass. Pods using this RuntimeClass can only be scheduled to a node matched by this selector. The RuntimeClass nodeSelector is merged with a pod's existing nodeSelector. Any conflicts will cause the pod to be rejected in admission. tolerations array (Toleration) tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. 5.2. API endpoints The following API endpoints are available: /apis/node.k8s.io/v1/runtimeclasses DELETE : delete collection of RuntimeClass GET : list or watch objects of kind RuntimeClass POST : create a RuntimeClass /apis/node.k8s.io/v1/watch/runtimeclasses GET : watch individual changes to a list of RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/node.k8s.io/v1/runtimeclasses/{name} DELETE : delete a RuntimeClass GET : read the specified RuntimeClass PATCH : partially update the specified RuntimeClass PUT : replace the specified RuntimeClass /apis/node.k8s.io/v1/watch/runtimeclasses/{name} GET : watch changes to an object of kind RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/node.k8s.io/v1/runtimeclasses HTTP method DELETE Description delete collection of RuntimeClass Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RuntimeClass Table 5.3. HTTP responses HTTP code Response body 200 - OK RuntimeClassList schema 401 - Unauthorized Empty HTTP method POST Description create a RuntimeClass Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body RuntimeClass schema Table 5.6. HTTP responses HTTP code Response body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 202 - Accepted RuntimeClass schema 401 - Unauthorized Empty 5.2.2. /apis/node.k8s.io/v1/watch/runtimeclasses HTTP method GET Description watch individual changes to a list of RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/node.k8s.io/v1/runtimeclasses/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the RuntimeClass HTTP method DELETE Description delete a RuntimeClass Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RuntimeClass Table 5.11. HTTP responses HTTP code Response body 200 - OK RuntimeClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RuntimeClass Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. HTTP responses HTTP code Response body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RuntimeClass Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body RuntimeClass schema Table 5.16. HTTP responses HTTP code Response body 200 - OK RuntimeClass schema 201 - Created RuntimeClass schema 401 - Unauthorized Empty 5.2.4. /apis/node.k8s.io/v1/watch/runtimeclasses/{name} Table 5.17.
Global path parameters Parameter Type Description name string name of the RuntimeClass HTTP method GET Description watch changes to an object of kind RuntimeClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
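A minimal sketch of a RuntimeClass and a pod that references it, to make the fields above concrete. The handler name, the node label, and the overhead values are assumptions and must match the CRI configuration of your nodes; they are not defaults.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: example-runtime              # referenced by pods through runtimeClassName
handler: runc                        # must match a handler configured in the CRI on the nodes (assumption)
overhead:
  podFixed:                          # fixed per-pod resource overhead accounted for at scheduling time
    memory: "64Mi"
    cpu: "100m"
scheduling:
  nodeSelector:
    runtime.example.com/ready: "true"   # pods using this class schedule only to nodes with this label (assumption)
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  runtimeClassName: example-runtime     # resolved by the kubelet before the pod starts
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi
    command: ["sleep", "infinity"]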
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/node_apis/runtimeclass-node-k8s-io-v1
Chapter 2. Installing and configuring Instance HA
Chapter 2. Installing and configuring Instance HA You use Red Hat OpenStack Platform (RHOSP) director to deploy Instance High Availability (HA). However, you must perform additional steps to configure a new Instance HA deployment on a new overcloud. After you complete the steps, Instance HA will run on a subset of Compute nodes with a custom role. Important Instance HA is not supported in RHOSP hyperconverged infrastructure (HCI) environments. To use Instance HA in your RHOSP HCI environment, you must designate a subset of the Compute nodes with the ComputeInstanceHA role to use Instance HA. Red Hat Ceph Storage services must not be hosted on the Compute nodes that host Instance HA. Important To enable Instance HA in a different environment, such as an existing overcloud that uses standard or custom roles, perform only the procedures that are relevant to your deployment and adapt your templates accordingly. 2.1. Configuring the Instance HA role and profile Before you deploy Instance HA, add the Instance HA role to your roles-data.yaml file, tag each Compute node that you want to manage with Instance HA with the Instance HA profile, and add these to your overcloud-baremetal-deploy.yaml file or equivalent. For more information about designating overcloud nodes for specific roles, see: Designating overcloud nodes for roles by matching profiles . As an example, you can use the computeiha profile to configure the node. Procedure Add the role to your overcloud-baremetal-deploy.yaml file, if not already defined. Edit overcloud-baremetal-deploy.yaml to define the profile that you want to assign to the nodes for the role: Provision the overcloud nodes: Replace <stack> with the name of the stack for which you provisioned the bare-metal nodes. The default value is overcloud . Replace <deployment_file> with a name that you choose for the generated heat environment file to include with the deployment command, for example /home/stack/templates/overcloud-baremetal-deployed.yaml . 2.2. Enabling fencing on an overcloud with Instance HA Enable fencing on all Controller and Compute nodes in the overcloud by creating an environment file with fencing information. Procedure Create the environment file in an accessible location, such as ~/templates , and include the following content: If you use shared storage for your Compute instances, set the following parameter in your environment file to false : Additional resources Section 1.2, "Planning your Instance HA deployment" Fencing Controller Nodes with STONITH 2.3. Deploying the overcloud with Instance HA If you already deployed the overcloud, you can run the openstack overcloud deploy command again with the additional Instance HA files you created. You can configure Instance HA for your overcloud at any time after you create the undercloud. Prerequisites You configured an Instance HA role and profile. You enabled fencing on the overcloud. Procedure Use the openstack overcloud deploy command with the -e option to include the compute-instanceha.yaml environment file and to include additional environment files. Replace <fencing_environment_file> with the appropriate file names for your environment: Note Do not modify the compute-instanceha.yaml environment file. Include the full path to each environment file that you want to include in the overcloud deployment. After deployment, each Compute node includes a STONITH device and a pacemaker_remote service. 2.4.
Testing Instance HA evacuation To test that Instance HA evacuates instances correctly, you trigger evacuation on a Compute node and check that the Instance HA agents successfully evacuate and re-create the instance on a different Compute node. Warning The following procedure involves deliberately crashing a Compute node, which triggers the automated evacuation of instances with Instance HA. Prerequisites Instance HA is deployed on the Compute node. Procedure Start one or more instances on the overcloud. Log in to the Compute node that hosts the instances and change to the root user. Replace compute-n with the name of the Compute node: Crash the Compute node. Wait a few minutes for the node to restart, and then verify that the instances from the Compute node that you crash are re-created on another Compute node: 2.5. Designating instances to evacuate with Instance HA By default, Instance HA evacuates all instances from a failed node. You can configure Instance HA to only evacuate instances with specific images or flavors. Prerequisites Instance HA is deployed on the overcloud. Procedure Log in to the undercloud as the stack user. Source the overcloudrc file: Use one of the following options: Tag an image: Replace <image_id> with the ID of the image that you want to evacuate. Tag a flavor: Replace <flavor_id> with the ID of the flavor that you want to evacuate. If you are using host aggregates, then add the same tag or property to the host aggregates. For more information, see Flavor metadata in Configuring the Compute service for instance creation. 2.6. Additional resources Installing and managing Red Hat OpenStack Platform with director Composable services and custom roles
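A minimal sketch of the profile assignment that section 2.1 refers to, applied to the node definition shown in the commands below. The exact placement of the profile key in overcloud-baremetal-deploy.yaml can differ between RHOSP releases, so treat this layout as an assumption to verify against your version of the file.
- name: ComputeInstanceHA
  count: 2
  defaults:
    profile: computeiha        # tags the nodes so they are matched to the Instance HA role (placement is an assumption)
  instances:
    - hostname: overcloud-novacompute-0
      name: node04
    - hostname: overcloud-novacompute-1
      name: node05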
[ "- name: ComputeInstanceHA count: 2 hostname_format: compute-%index% defaults: network_config: template: /home/stack/composable_roles/network/nic-configs/compute.j2 networks: - network: ctlplane vif: true - network: internal_api - network: tenant - network: storage instances: - hostname: overcloud-novacompute-0 name: node04 - hostname: overcloud-novacompute-1 name: node05", "(undercloud)USD openstack overcloud node provision --stack <stack> --output <deployment_file> /home/stack/templates/overcloud-baremetal-deploy.yaml", "parameter_defaults: EnableFencing: true FencingConfig: devices: - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:c7\" params: login: admin ipaddr: 192.168.24.1 ipport: 6230 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:cb\" params: login: admin ipaddr: 192.168.24.1 ipport: 6231 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:cf\" params: login: admin ipaddr: 192.168.24.1 ipport: 6232 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:d3\" params: login: admin ipaddr: 192.168.24.1 ipport: 6233 passwd: password lanplus: 1 - agent: fence_ipmilan host_mac: \"00:ec:ad:cb:3c:d7\" params: login: admin ipaddr: 192.168.24.1 ipport: 6234 passwd: password lanplus: 1", "parameter_defaults: ExtraConfig: tripleo::instanceha::no_shared_storage: false", "openstack overcloud deploy --templates -e <fencing_environment_file> -r my_roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/compute-instanceha.yaml", "stack@director USD . overcloudrc stack@director USD openstack server create --image cirros --flavor 2 test-failover stack@director USD openstack server list -c Name -c Status", "stack@director USD . stackrc stack@director USD ssh -l tripleo-admin compute-n tripleo-admin@ compute-n USD su -", "root@ compute-n USD echo c > /proc/sysrq-trigger", "stack@director USD openstack server list -c Name -c Status stack@director USD openstack compute service list", "source ~/overcloudrc", "(overcloud) USD openstack image set --tag evacuable <image_id>", "(overcloud) USD openstack flavor set --property evacuable=true <flavor_id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_high_availability_for_instances/assembly_installing-configuring-instanceha_rhosp
function::kernel_string_quoted
function::kernel_string_quoted Name function::kernel_string_quoted - Retrieves and quotes string from kernel memory Synopsis Arguments addr the kernel memory address to retrieve the string from Description Returns the null terminated C string from a given kernel memory address where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Note that the string will be surrounded by double quotes. If the kernel memory data is not accessible at the given address, the address itself is returned as a string, without double quotes.
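A minimal usage sketch. The probe point (vfs_unlink) and the availability of the $dentry context variable depend on the kernel version and on installed debuginfo, so treat both as assumptions; the point is only to show kernel_string_quoted applied to a string that lives in kernel memory. Run with: stap -v unlink.stp (the file name is hypothetical).
# unlink.stp - print the quoted name of every file that is unlinked
probe kernel.function("vfs_unlink")
{
  # d_name->name points into kernel memory, so kernel_string_quoted is appropriate here
  printf("%s removed %s\n", execname(), kernel_string_quoted($dentry->d_name->name))
}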
[ "kernel_string_quoted:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-kernel-string-quoted
13.3. Certificate Management in Email Clients
13.3. Certificate Management in Email Clients The following example shows how to manage certificates in the Mozilla Thunderbird email client. It represents a procedure to set up certificates in email clients in general. In Mozilla Thunderbird, open the Thunderbird main menu and select Preferences Account Settings . Select the Security item, and click View Certificates to open the Certificate Manager . Figure 13.7. Account Settings in Thunderbird To import a CA certificate: Download and save the CA certificate to your computer. In the Certificate Manager , choose the Authorities tab and click Import . Figure 13.8. Importing the CA Certificate in Thunderbird Select the downloaded CA certificate. To set the certificate trust relationships: In the Certificate Manager , under the Authorities tab, select the appropriate certificate and click Edit Trust . Edit the certificate trust settings. Figure 13.9. Editing the Certificate Trust Settings in Thunderbird To use a personal certificate for authentication: In the Certificate Manager , under the Your Certificates tab, click Import . Figure 13.10. Importing a Personal Certificate for Authentication in Thunderbird Select the required certificate from your computer. Close the Certificate Manager and return to the Security item in Account Settings . Under the Digital Signing section of the form, click Select to choose your personal certificate to use for signing messages. Under Encryption , click Select to choose your personal certificate to encrypt and decrypt messages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/smime_applications
8.3. ORBit2
8.3. ORBit2 8.3.1. RHBA-2014:1563 - ORBit2 bug fix update Updated ORBit2 packages that fix one bug are now available for Red Hat Enterprise Linux 6. The ORBit2 packages provide a high-performance Object Request Broker (ORB) for the Common Object Request Broker Architecture (CORBA). ORBit allows programs to send requests and receive replies from other programs, regardless of where the programs are located. CORBA is a standard that enables communication between program objects, regardless of the programming language and platform used. Bug Fix BZ# 784223 Due to improper synchronization between multiple threads when accessing shared data objects, the bonobo-activation-server process was in certain cases killed by the SIGSEGV signal. With this update, clean-up tasks on one thread have been deferred, so synchronization is no longer necessary. As a result, the process does not crash anymore in the described scenario. Users of ORBit2 are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/orbit2
Chapter 2. Metrics file locations
Chapter 2. Metrics file locations Reporting metrics to Red Hat is a requirement. Logging metrics for your automation jobs is automatically enabled when you install Ansible SDK. You cannot disable it. Every time an automation job runs, a new tarball is created. You are responsible for scraping the data from the storage location and for monitoring the size of the directory. You can customize the metrics storage location for each Python file that runs a playbook, or you can use the default location. 2.1. Default location for metrics files When you install Ansible SDK, the default metrics storage location is set to the ~/.ansible/metrics directory. After an automation job is complete, the metrics are written to a tarball in the directory. Ansible SDK creates the directory if it does not already exist. 2.2. Customizing the metrics storage location You can specify the path to the directory to store your metrics files in the Python file that runs your playbook. You can set a different directory path for every Python automation job file, or you can store the tarballs for multiple jobs in one directory. If you do not set the path in a Python file, the tarballs for the jobs that it runs will be saved in the default directory ( ~/.ansible/metrics ). Procedure Decide on a location on your file system to store the metrics data. Ensure that the location is readable and writable. Ansible SDK creates the directory if it does not already exist. In the job_options in the main() function of your Python file, set the metrics_output_path parameter to the directory where the tarballs are to be stored. In the following example, the metrics files are stored in the /tmp/metrics directory after the pb.yml playbook has been executed: async def main(): executor = AnsibleSubprocessJobExecutor() executor_options = AnsibleSubprocessJobOptions() job_options = { 'playbook': 'pb.yml', # Change the default job-related data path 'metrics_output_path': '/tmp/metrics', } 2.3. Viewing metrics files After an automation job has completed, navigate to the directory that you specified for storing the data and list the files. The data for the newly-completed job is contained in a tarball file whose name begins with the date and time that the automation job was run. For example, the following file records data for an automation job executed on 8 March 2023 at 2.30AM. USD ls 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz To view the files in the tarball, run tar xvf . USD tar xvf 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz x jobs.csv x modules.csv x collections.csv x roles.csv x playbook_on_stats.csv The following example shows the jobs.csv file. USD cat jobs.csv job_id,job_type,started,finished,job_state,hosts_ok,hosts_changed,hosts_skipped,hosts_failed,hosts_unreachable,task_count,task_duration 84896567-a586-4215-a914-7503010ef281,local,2023-03-08 02:30:22.440045,2023-03-08 02:30:24.316458,,5,0,0,0,0,2,0:00:01.876413 When a parameter value is not available, the corresponding entry in the CSV file is empty. In the jobs.csv file above, the job_state value is not available.
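Because the extracted files are plain CSV, the Python standard library is enough to consume them. A minimal sketch, assuming the tarball was unpacked into /tmp/metrics (the path is only an example):
import csv
from pathlib import Path

jobs_file = Path("/tmp/metrics") / "jobs.csv"   # example path; use your own metrics_output_path

with jobs_file.open(newline="") as f:
    for row in csv.DictReader(f):
        # fields that were not available for a job appear as empty strings
        print(row["job_id"], row["job_type"], row["hosts_ok"], row["hosts_failed"], row["task_duration"])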
[ "async def main(): executor = AnsibleSubprocessJobExecutor() executor_options = AnsibleSubprocessJobOptions() job_options = { 'playbook': 'pb.yml', # Change the default job-related data path 'metrics_output_path': '/tmp/metrics', }", "ls 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz", "tar xvf 2023_03_08_02_30_24__aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa_job_data.tar.gz x jobs.csv x modules.csv x collections.csv x roles.csv x playbook_on_stats.csv", "cat jobs.csv job_id,job_type,started,finished,job_state,hosts_ok,hosts_changed,hosts_skipped,hosts_failed,hosts_unreachable,task_count,task_duration 84896567-a586-4215-a914-7503010ef281,local,2023-03-08 02:30:22.440045,2023-03-08 02:30:24.316458,,5,0,0,0,0,2,0:00:01.876413" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_inside/1.3/html/red_hat_ansible_inside_reporting_guide/reporting-file-location
Chapter 2. Creating the first administrator
Chapter 2. Creating the first administrator After installing Red Hat build of Keycloak, you need an administrator account that can act as a super admin with full permissions to manage Red Hat build of Keycloak. With this account, you can log in to the Red Hat build of Keycloak Admin Console where you create realms and users and register applications that are secured by Red Hat build of Keycloak. 2.1. Creating the account on the local host If your server is accessible from localhost , perform these steps. Procedure In a web browser, go to the http://localhost:8080 URL. Supply a username and password that you can recall. Welcome page 2.2. Creating the account remotely If you cannot access the server from a localhost address or just want to start Red Hat build of Keycloak from the command line, use the KC_BOOTSTRAP_ADMIN_USERNAME and KC_BOOTSTRAP_ADMIN_PASSWORD environment variables to create an initial admin account. For example: export KC_BOOTSTRAP_ADMIN_USERNAME=<username> export KC_BOOTSTRAP_ADMIN_PASSWORD=<password> bin/kc.[sh|bat] start
[ "export KC_BOOTSTRAP_ADMIN_USERNAME=<username> export KC_BOOTSTRAP_ADMIN_PASSWORD=<password> bin/kc.[sh|bat] start" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/creating-first-admin_server_administration_guide
Chapter 2. Support
Chapter 2. Support Only the configuration options described in this documentation are supported for the logging subsystem. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged OpenShift Logging environment is not supported and does not receive updates until you return OpenShift Logging to Managed . Note The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. The logging subsystem for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. The logging subsystem for Red Hat OpenShift is not: A high-scale log collection system Security Information and Event Management (SIEM) compliant Historical or long-term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default
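For reference, the Unmanaged state mentioned above is set on the ClusterLogging custom resource. A minimal sketch, assuming the default instance name and namespace used by the logging subsystem:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance                # default name of the logging instance
  namespace: openshift-logging  # default namespace for the logging subsystem
spec:
  managementState: Unmanaged    # the Operator stops reconciling changes; this state is unsupported and receives no updates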
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/support
Chapter 3. Configuring a webhook integration with the Hybrid Cloud Console
Chapter 3. Configuring a webhook integration with the Hybrid Cloud Console For third-party applications that do not have a native Hybrid Cloud Console integration available, you can use the webhook integration type to receive event notifications from the Hybrid Cloud Console in your chosen application. Webhook integrations support events from all Hybrid Cloud Console services. Third-party applications can be configured to allow inbound data requests by exposing webhooks and using them to listen for incoming events. The Hybrid Cloud Console integrations service uses this functionality to send events and associated data from each service. You can configure the Hybrid Cloud Console notifications service to send POST messages to those third-party application webhook endpoints. For example, you can configure the Hybrid Cloud Console to automatically forward events triggered when a new Advisor recommendation is found. The event and its data are sent as an HTTP POST message to the third-party application on its incoming webhook endpoint. After you configure the endpoints in the notifications service, you can subscribe to a stream of Hybrid Cloud Console events and automatically forward that stream to the webhooks of your choice. Each event contains additional metadata, which you can use to process the event, for example, to perform specific actions or trigger responses, as part of your operational workflow. You configure the implementation and data handling within your application. Contacting support If you have any issues with integrating the Hybrid Cloud Console with webhooks, contact Red Hat for support. You can open a Red Hat support case directly from the Hybrid Cloud Console by clicking Help ( ? icon) > Open a support case , or view more options from ? > Support options . 3.1. Configuring a webhook integration with the Hybrid Cloud Console For third-party applications that do not have a native Hybrid Cloud Console integration available, configure a webhook integration to receive event notifications from the Hybrid Cloud Console in your chosen application. Prerequisites You have configured a webhook in your third-party application. You have Organization Administrator or Notifications administrator permissions for the Hybrid Cloud Console. Procedure In the Hybrid Cloud Console, navigate to Settings > Integrations . Click the Webhooks tab. Click Add integration . In the Integration name field, enter a name for your integration. Paste the webhook URL from your third-party application into the Endpoint URL field. Optional: Enter a Secret token if one is configured. Note A secret token is essential for protecting the data sent to the integration endpoint and should always be used when integrating the Hybrid Cloud Console with third-party applications. Click Next . Optional: Associate events with the integration. Doing this automatically creates a behavior group. Note You can skip this step and associate the event types later. Select a product family, for example OpenShift , Red Hat Enterprise Linux , or Console . Select the event types you would like your integration to react to. To enable the integration, review the integration details and click Submit . Refresh the Integrations page to show the webhook integration in the Integrations > Webhooks list. Under Last connection attempt , the status is Ready , which shows that the connection can accept notifications from the Hybrid Cloud Console.
Verification Create a test notification to confirm you have correctly connected your application to the Hybrid Cloud Console: Next to your integration on the Integrations > Webhooks page, click the options icon (...) and then click Test . In the Integration Test screen, enter a message and click Send . If you leave the field empty, the Hybrid Cloud Console sends a default message. Open your third-party application and check for the message sent from the Hybrid Cloud Console. In the Hybrid Cloud Console, go to Notifications > Event Log and check that the Integration: Webhook event is listed with a green label. Additional resources For more information about setting up Notifications administrator permissions, see Configure User Access to manage notifications in the notifications documentation. 3.2. Creating the behavior group for a webhook integration A behavior group defines which notifications will be sent to external services when a specific event is received by the notifications service. You can link events from any Red Hat Hybrid Cloud Console service to your behavior group. For more information about behavior groups, see Configuring Hybrid Cloud Console notification behavior groups . Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. A webhook integration is configured. For configuration steps, see Section 3.1, "Configuring a webhook integration with the Hybrid Cloud Console" . Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications . Under Notifications , select Configure Events . Select the application bundle tab you want to configure event notification behavior for: Red Hat Enterprise Linux , Console , or OpenShift . Click the Behavior Groups tab. Click Create new group to open the Create behavior group wizard. Type a name for the behavior group and click Next . In the Actions and Recipients step, select Integration: Webhook from the Actions drop-down list. From the Recipient drop-down list, select the name of the webhook integration you created and click Next . In the Associate event types step, select one or more events for which you want to send notifications (for example, Policies: Policy triggered ) and click Next . Review your behavior group settings and click Finish . The new behavior group appears on the Notifications > Configure Events page in the Behavior Groups tab. Verification Create an event that will trigger a Hybrid Cloud Console notification. For example, run insights-client on a system that will trigger a policy event. Wait a few minutes, and then navigate to your third-party application to check for a notification. In the Hybrid Cloud Console, go to Settings > Notifications > Event Log and check for an event that shows the label Integration: Webhook . If the label is green, the notification succeeded. If the label is red, verify that the incoming webhook connector was properly created in your application, and that the correct incoming webhook URL is added in the Hybrid Cloud Console integration configuration. Note See Troubleshooting notification failures in the notifications documentation for more details. 3.3. Additional resources For more information about Hybrid Cloud Console notification methods, see Configuring notifications on the Red Hat Hybrid Cloud Console . For information about troubleshooting your integration, see Troubleshooting Hybrid Cloud Console integrations .
For webhook configuration examples, see these articles in the Red Hat blog: Exploring Red Hat Insights integration with Jira Software Configuring Hybrid Cloud Console to forward notifications events to a Jira webhook trigger Integrate Red Hat Insights into your existing operational workflow
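The console delivers each event as an HTTP POST with a JSON body, and the secret token described in section 3.1 exists so that the receiving side can reject requests that did not come from the console. The following is a minimal receiver sketch using only the Python standard library; the header name carrying the token and the port are placeholders chosen for illustration, not values defined by the Hybrid Cloud Console.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Same value that was entered as the Secret token in the console integration.
EXPECTED_TOKEN = os.environ.get("WEBHOOK_SECRET_TOKEN", "")

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # "X-Webhook-Token" is a placeholder header name; check how your endpoint actually receives the secret.
        if EXPECTED_TOKEN and self.headers.get("X-Webhook-Token") != EXPECTED_TOKEN:
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Process the event metadata here, for example by routing it into your operational workflow.
        print("received event:", event)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EventHandler).serve_forever()   # port 8080 is an example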
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/assembly-configuring-integration-with-webhooks_integrating-communications
Chapter 6. Custom image builds with Buildah
Chapter 6. Custom image builds with Buildah With OpenShift Container Platform 4.17, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image. If you require this capability in order to build and push images, add the Buildah tool to your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah. Note Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster. 6.1. Prerequisites Review how to grant custom build permissions . 6.2. Creating custom build artifacts You must create the image you want to use as your custom build image. Procedure Starting with an empty directory, create a file named Dockerfile with the following content: FROM registry.redhat.io/rhel8/buildah # In this example, `/tmp/build` contains the inputs that build when this # custom builder image is run. Normally the custom builder image fetches # this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh # /usr/bin/build.sh contains the actual custom build logic that will be run when # this custom builder image is run. ENTRYPOINT ["/usr/bin/build.sh"] In the same directory, create a file named dockerfile.sample . This file is included in the custom build image and defines the image that is produced by the custom build: FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build In the same directory, create a file named build.sh . This file contains the logic that is run when the custom build runs: #!/bin/sh # Note that in this case the build inputs are part of the custom builder image, but normally this # is retrieved from an external source. cd /tmp/input # OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom # build framework TAG="USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}" # performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . # buildah requires a slight modification to the push secret provided by the service # account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg # push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG} 6.3. Build custom builder image You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy. Prerequisites Define all the inputs that will go into creating your new custom builder image. Procedure Define a BuildConfig object that will build your custom builder image: USD oc new-build --binary --strategy=docker --name custom-builder-image From the directory in which you created your custom build image, run the build: USD oc start-build custom-builder-image --from-dir . -F
After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest . 6.4. Use custom builder image You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic. Prerequisites Define all the required inputs for the new custom builder image. Build your custom builder image. Procedure Create a file named buildconfig.yaml . This file defines the BuildConfig object that is created in your project and executed: kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest 1 Specify your project name. Create the BuildConfig object by entering the following command: USD oc create -f buildconfig.yaml Create a file named imagestream.yaml . This file defines the image stream to which the build will push the image: kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {} Create the image stream by entering the following command: USD oc create -f imagestream.yaml Run your custom build by entering the following command: USD oc start-build sample-custom-build -F When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream .
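A quick way to confirm the result after the build finishes; a hedged sketch using standard oc commands (istag is the short name for imagestreamtag):
oc get imagestreamtag sample-custom:latest
oc describe istag sample-custom:latest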
[ "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . -F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/builds_using_buildconfig/custom-builds-buildah
4.132. libesmtp
4.132. libesmtp 4.132.1. RHEA-2011:1775 - libesmtp enhancement update An updated libesmtp package that adds one enhancement is now available for Red Hat Enterprise Linux 6. LibESMTP is a library to manage posting or submitting electronic mail using SMTP to a preconfigured Mail Transport Agent (MTA). The libesmtp package is required by Open MPI. Enhancement BZ# 738760 Previously, LibESMTP was not shipped with Red Hat Enterprise Linux 6 on the 64-bit PowerPC platform. This update adds the LibESMTP package to the 64-bit PowerPC variant, as a requirement of the updated Open MPI. Note that this update does not contain any changes for other architectures. All users requiring libesmtp on the 64-bit PowerPC architecture are advised to install this package, which adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libesmtp