title | content | commands | url
---|---|---|---|
Chapter 2. Configuring Ansible automation hub remote repositories to sync content
|
Chapter 2. Configuring Ansible automation hub remote repositories to sync content You can configure your private automation hub to synchronize with Ansible Certified Content Collections hosted in your organization repository on console.redhat.com or to your choice of collections in Ansible Galaxy. 2.1. Reasons to configure remote repositories By configuring remote repositories, you can set your private automation hub to synchronize Red Hat Certified Collections hosted in your organization's repository on console.redhat.com and your choice of collections in Ansible Galaxy. Each remote repository located in Repo Management Remote provides information for both the community and rh-certified repository about when the repository was last updated and when content was last synced. You can add new content to Ansible automation hub at any time using the Edit and Sync features included on the Repo Management Remote page. 2.2. Retrieving the Sync URL and API token for your Red Hat Certified Collection You can synchronize Ansible Certified Content Collections curated by your organization from console.redhat.com to your private automation hub. Prerequisites You have organization administrator permissions to create the synclist on console.redhat.com. Procedure Log in to console.redhat.com as an organization administrator. Navigate to Automation Hub Repo Management . Locate the Sync URL and click the Copy to clipboard icon. Paste the Sync URL in a file to use when configuring the rh-certified remote. Click the More actions icon and click Get token . On the Token management page, click Load token . Click Copy to clipboard to copy the API token. Paste the API token into a file and store in a secure location. Important The API token is a secret token used to protect your content. Store your API token in a secure location. 2.3. Configuring the rh-certified remote repository and synchronizing Red Hat Ansible Certified Content Collection. You can edit the rh-certified remote repository to synchronize collections from automation hub hosted on console.redhat.com to your private automation hub. By default, your private automation hub rh-certified repository includes the URL for the entire group of Ansible Certified Content Collections. To use only those collections specified by your organization, you must include a unique URL. Prerequisites You have valid Modify Ansible repo content permissions. See Managing user access in Automation Hub for more information on permissions. You have retrieved the Sync URL and API Token from the automation hub hosted service on console.redhat.com. You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the automation hub table in the Network ports and protocols chapter of the Red Hat Ansible Automation Platform Planning Guide. Procedure Log in to your private automation hub. Navigate to Repo Management . Click the Remotes tab. In the rh-certified remote repository, click the More Actions icon and click Edit . In the modal, paste the Sync URL and Token you acquired from console.redhat.com. Click Save . The modal closes and returns you to the Repo Management page. You can now synchronize collections between your organization synclist on console.redhat.com and your private automation hub. Click Sync to synchronize collections. The Sync status notification updates to notify you of completion of the Red Hat Certified Content Collections synchronization. 
Verification Select Red Hat Certified from the collections content drop-down list to confirm that your collections content has synchronized successfully. 2.4. Configuring the community remote repository and syncing Ansible Galaxy collections You can edit the community remote repository to synchronize chosen collections from Ansible Galaxy to your private automation hub. By default, your private automation hub community repository directs to galaxy.ansible.com/api/ . Prerequisites You have Modify Ansible repo content permissions. See Managing user access in Automation Hub for more information on permissions. You have a requirements.yml file that identifies those collections to synchronize from Ansible Galaxy as in the following example: Requirements.yml example Procedure Log in to your Ansible automation hub. Navigate to Repo Management . Click the Remotes tab. In the Community remote, click the More Actions icon and click Edit . In the modal, click Browse and locate the requirements.yml file on your local machine. Click Save . The modal closes and returns you to the Repo Management page. You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub. Click Sync to sync collections from Ansible Galaxy and Ansible automation hub. The Sync status notification updates to notify you of completion or failure of Ansible Galaxy collections synchronization to your Ansible automation hub. Verification Select Community from the collections content drop-down list to confirm successful synchronization.
|
[
"collections: # Install a collection from {Galaxy}. - name: community.aws version: 5.2.0 source: https://galaxy.ansible.com"
] |
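For readability, the flattened requirements.yml entry above corresponds to a short YAML file. The following sketch writes that same content to disk so it can be uploaded in the Community remote's Edit modal; the file path ~/requirements.yml is only an illustrative choice.

```bash
# Sketch: recreate the requirements.yml example shown above.
# The collection name, version, and source come from the example;
# the output path is an arbitrary choice for illustration.
cat > ~/requirements.yml <<'EOF'
collections:
  # Install a collection from Ansible Galaxy.
  - name: community.aws
    version: 5.2.0
    source: https://galaxy.ansible.com
EOF
```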
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/assembly-creating-tokens-in-automation-hub
|
20.24. Displaying CPU Statistics for a Specified Guest Virtual Machine
|
20.24. Displaying CPU Statistics for a Specified Guest Virtual Machine The virsh cpu-stats domain --total start count command provides the CPU statistical information on the specified guest virtual machine. By default, it shows the statistics for all CPUs, as well as a total. The --total option will only display the total statistics. The --count option will only display statistics for count CPUs. Example 20.51. How to generate CPU statistics for the guest virtual machine The following example generates CPU statistics for the guest virtual machine named guest1 .
|
[
"virsh cpu-stats guest1 CPU0: cpu_time 242.054322158 seconds vcpu_time 110.969228362 seconds CPU1: cpu_time 170.450478364 seconds vcpu_time 106.889510980 seconds CPU2: cpu_time 332.899774780 seconds vcpu_time 192.059921774 seconds CPU3: cpu_time 163.451025019 seconds vcpu_time 88.008556137 seconds Total: cpu_time 908.855600321 seconds user_time 22.110000000 seconds system_time 35.830000000 seconds"
] |
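As a quick illustration of the options described above, the following hedged sketch exercises the --total and --count flags against the guest1 domain from the example; the domain name is taken from the example and must exist on your host.

```bash
# Show only the aggregate statistics, without the per-CPU breakdown.
virsh cpu-stats guest1 --total

# Show statistics for two CPUs only, starting at CPU 2.
virsh cpu-stats guest1 --start 2 --count 2
```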
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-editing_a_guest_virtual_machines_configuration_file-displaying_cpu_statistics_for_a_specified_domain
|
Chapter 11. Integrating Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator
|
Chapter 11. Integrating Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator The Quay Bridge Operator duplicates the features of the integrated OpenShift Container Platform registry into the new Red Hat Quay registry. Using the Quay Bridge Operator, you can replace the integrated container registry in OpenShift Container Platform with a Red Hat Quay registry. The features enabled with the Quay Bridge Operator include: Synchronizing OpenShift Container Platform namespaces as Red Hat Quay organizations. Creating robot accounts for each default namespace service account. Creating secrets for each created robot account, and associating each robot secret to a service account as Mountable and Image Pull Secret . Synchronizing OpenShift Container Platform image streams as Red Hat Quay repositories. Automatically rewriting new builds making use of image streams to output to Red Hat Quay. Automatically importing an image stream tag after a build completes. By using the following procedures, you can enable bi-directional communication between your Red Hat Quay and OpenShift Container Platform clusters. 11.1. Setting up Red Hat Quay for the Quay Bridge Operator In this procedure, you will create a dedicated Red Hat Quay organization, and from a new application created within that organization you will generate an OAuth token to be used with the Quay Bridge Operator in OpenShift Container Platform. Procedure Log in to Red Hat Quay through the web UI. Select the organization for which the external application will be configured. On the navigation pane, select Applications . Select Create New Application and enter a name for the new application, for example, openshift . On the OAuth Applications page, select your application, for example, openshift . On the navigation pane, select Generate Token . Select the following fields: Administer Organization Administer Repositories Create Repositories View all visible repositories Read/Write to any accessible repositories Administer User Read User Information Review the assigned permissions. Select Authorize Application and then confirm the authorization by selecting Authorize Application . Save the generated access token. Important Red Hat Quay does not offer token management. You cannot list tokens, delete tokens, or modify tokens. The generated access token is only shown once and cannot be re-obtained after closing the page. 11.2. Installing the Quay Bridge Operator on OpenShift Container Platform In this procedure, you will install the Quay Bridge Operator on OpenShift Container Platform. Prerequisites You have set up Red Hat Quay and obtained an Access Token. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. Procedure Open the Administrator perspective of the web console and navigate to Operators OperatorHub on the navigation pane. Search for Quay Bridge Operator , click the Quay Bridge Operator title, and then click Install . Select the version to install, for example, stable-3.7 , and then click Install . Click View Operator when the installation finishes to go to the Quay Bridge Operator's Details page. Alternatively, you can click Installed Operators Red Hat Quay Bridge Operator to go to the Details page. 11.3. Creating an OpenShift Container Platform secret for the OAuth token In this procedure, you will add the previously obtained access token to communicate with your Red Hat Quay deployment. 
The access token will be stored within OpenShift Container Platform as a secret. Prerequisites You have set up Red Hat Quay and obtained an access token. You have deployed the Quay Bridge Operator on OpenShift Container Platform. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. You have installed the OpenShift CLI (oc). Procedure Create a secret that contains the access token in the openshift-operators namespace: USD oc create secret -n openshift-operators generic <secret-name> --from-literal=token=<access_token> 11.4. Creating the QuayIntegration custom resource In this procedure, you will create a QuayIntegration custom resource, which can be completed from either the web console or from the command line. Prerequisites You have set up Red Hat Quay and obtained an access token. You have deployed the Quay Bridge Operator on OpenShift Container Platform. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. Optional: You have installed the OpenShift CLI (oc). 11.4.1. Optional: Creating the QuayIntegration custom resource using the CLI Follow this procedure to create the QuayIntegration custom resource using the command line. Procedure Create a quay-integration.yaml : Use the following configuration for a minimal deployment of the QuayIntegration custom resource: apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration spec: clusterID: openshift 1 credentialsSecret: namespace: openshift-operators name: quay-integration 2 quayHostname: https://<QUAY_URL> 3 insecureRegistry: false 4 1 The clusterID value should be unique across the entire ecosystem. This value is required and defaults to openshift . 2 The credentialsSecret property refers to the namespace and name of the secret containing the token that was previously created. 3 Replace the QUAY_URL with the hostname of your Red Hat Quay instance. 4 If Red Hat Quay is using self signed certificates, set the property to insecureRegistry: true . For a list of all configuration fields, see "QuayIntegration configuration fields". Create the QuayIntegration custom resource: 11.4.2. Optional: Creating the QuayIntegration custom resource using the web console Follow this procedure to create the QuayIntegration custom resource using the web console. Procedure Open the Administrator perspective of the web console and navigate to Operators Installed Operators . Click Red Hat Quay Bridge Operator . On the Details page of the Quay Bridge Operator, click Create Instance on the Quay Integration API card. On the Create QuayIntegration page, enter the following required information in either Form view or YAML view : Name : The name that will refer to the QuayIntegration custom resource object. Cluster ID : The ID associated with this cluster. This value should be unique across the entire ecosystem. Defaults to openshift if left unspecified. Credentials secret : Refers to the namespace and name of the secret containing the token that was previously created. Quay hostname : The hostname of the Quay registry. For a list of all configuration fields, see " QuayIntegration configuration fields ". After the QuayIntegration custom resource is created, your OpenShift Container Platform cluster will be linked to your Red Hat Quay instance. Organizations within your Red Hat Quay registry should be created for the related namespace for the OpenShift Container Platform environment. 11.5. 
Using Quay Bridge Operator Use the following procedure to use the Quay Bridge Operator. Prerequisites You have installed the Red Hat Quay Operator. You have logged into OpenShift Container Platform as a cluster administrator. You have logged into your Red Hat Quay registry. You have installed the Quay Bridge Operator. You have configured the QuayIntegration custom resource. Procedure Enter the following command to create a new OpenShift Container Platform project called e2e-demo : USD oc new-project e2e-demo After you have created a new project, a new Organization is created in Red Hat Quay. Navigate to the Red Hat Quay registry and confirm that you have created a new Organization named openshift_e2e-demo . Note The openshift value of the Organization might differ if the clusterID in your QuayIntegration resource used a different value. On the Red Hat Quay UI, click the name of the new Organization, for example, openshift_e2e-demo . Click Robot Accounts in the navigation pane. As part of the new project, the following Robot Accounts should have been created: openshift_e2e-demo+deployer openshift_e2e-demo+default openshift_e2e-demo+builder Enter the following command to confirm three secrets containing Docker configuration associated with the applicable Robot Accounts were created: USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift Example output stevsmit@stevsmit ocp-quay USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift NAME TYPE DATA AGE builder-quay-openshift kubernetes.io/dockerconfigjson 1 77m deployer-quay-openshift kubernetes.io/dockerconfigjson 1 77m default-quay-openshift kubernetes.io/dockerconfigjson 1 77m Enter the following command to display detailed information about the builder ServiceAccount (SA), including its secrets, token expiration, and associated roles and role bindings. This ensures that the project is integrated via the Quay Bridge Operator. USD oc describe sa builder default deployer Example output ... Name: builder Namespace: e2e-demo Labels: <none> Annotations: <none> Image pull secrets: builder-dockercfg-12345 builder-quay-openshift Mountable secrets: builder-dockercfg-12345 builder-quay-openshift Tokens: builder-token-12345 Events: <none> ... Enter the following command to create and deploy a new application called httpd-template : USD oc new-app --template=httpd-example Example output --> Deploying template "e2e-demo/httpd-example" to project e2e-demo ... --> Creating resources ... service "httpd-example" created route.route.openshift.io "httpd-example" created imagestream.image.openshift.io "httpd-example" created buildconfig.build.openshift.io "httpd-example" created deploymentconfig.apps.openshift.io "httpd-example" created --> Success Access your application via route 'httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org' Build scheduled, use 'oc logs -f buildconfig/httpd-example' to track its progress. Run 'oc status' to view your app. After running this command, BuildConfig , ImageStream , Service, Route , and DeploymentConfig resources are created. When the ImageStream resource is created, an associated repository is created in Red Hat Quay. The ImageChangeTrigger for the BuildConfig triggers a new Build when the Apache HTTPD image, located in the openshift namespace, is resolved. As the new Build is created, the MutatingWebhookConfiguration automatically rewrites the output to point at Red Hat Quay. 
You can confirm that the build is complete by querying the output field of the build by running the following command: USD oc get build httpd-example-1 --template='{{ .spec.output.to.name }}' Example output example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest On the Red Hat Quay UI, navigate to the openshift_e2e-demo Organization and select the httpd-example repository. Click Tags in the navigation pane and confirm that the latest tag has been successfully pushed. Enter the following command to ensure that the latest tag has been resolved: USD oc describe is httpd-example Example output Name: httpd-example Namespace: e2e-demo Created: 55 minutes ago Labels: app=httpd-example template=httpd-example Description: Keeps track of changes in the application image Annotations: openshift.io/generated-by=OpenShiftNewApp openshift.io/image.dockerRepositoryCheck=2023-10-02T17:56:45Z Image Repository: image-registry.openshift-image-registry.svc:5000/e2e-demo/httpd-example Image Lookup: local=false Unique Images: 0 Tags: 1 latest tagged from example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest After the ImageStream is resolved, a new deployment should have been triggered. Enter the following command to generate a URL output: USD oc get route httpd-example --template='{{ .spec.host }}' Example output httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org Navigate to the URL. If a sample webpage appears, the deployment was successful. Enter the following command to delete the resources and clean up your Red Hat Quay repository: USD oc delete project e2e-demo Note The command waits until the project resources have been removed. This can be bypassed by adding --wait=false to the above command. After the command completes, navigate to your Red Hat Quay repository and confirm that the openshift_e2e-demo Organization is no longer available. Additional resources Best practices dictate that all communication between a client and an image registry be facilitated through secure means. Communication should leverage HTTPS/TLS with a certificate trust between the parties. While Red Hat Quay can be configured to serve an insecure configuration, proper certificates should be utilized on the server and configured on the client. Follow the OpenShift Container Platform documentation for adding and managing certificates at the container runtime level.
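The end-to-end flow described in this chapter can also be spot-checked from the command line. This is a minimal sketch that strings together the verification commands already shown above; the resource names (example-quayintegration, e2e-demo, httpd-example) are the ones used in the examples and will differ in your environment.

```bash
# Confirm the QuayIntegration custom resource exists and inspect its spec.
oc get quayintegration example-quayintegration -o yaml

# Confirm the robot-account secrets created for the e2e-demo project.
oc -n e2e-demo get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift

# Confirm the build output was rewritten to point at Red Hat Quay.
oc -n e2e-demo get build httpd-example-1 --template='{{ .spec.output.to.name }}'

# Confirm the latest image stream tag was imported after the build completed.
oc -n e2e-demo describe is httpd-example
```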
|
[
"oc create secret -n openshift-operators generic <secret-name> --from-literal=token=<access_token>",
"touch quay-integration.yaml",
"apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration spec: clusterID: openshift 1 credentialsSecret: namespace: openshift-operators name: quay-integration 2 quayHostname: https://<QUAY_URL> 3 insecureRegistry: false 4",
"oc create -f quay-integration.yaml",
"oc new-project e2e-demo",
"oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift",
"stevsmit@stevsmit ocp-quay USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift NAME TYPE DATA AGE builder-quay-openshift kubernetes.io/dockerconfigjson 1 77m deployer-quay-openshift kubernetes.io/dockerconfigjson 1 77m default-quay-openshift kubernetes.io/dockerconfigjson 1 77m",
"oc describe sa builder default deployer",
"Name: builder Namespace: e2e-demo Labels: <none> Annotations: <none> Image pull secrets: builder-dockercfg-12345 builder-quay-openshift Mountable secrets: builder-dockercfg-12345 builder-quay-openshift Tokens: builder-token-12345 Events: <none>",
"oc new-app --template=httpd-example",
"--> Deploying template \"e2e-demo/httpd-example\" to project e2e-demo --> Creating resources service \"httpd-example\" created route.route.openshift.io \"httpd-example\" created imagestream.image.openshift.io \"httpd-example\" created buildconfig.build.openshift.io \"httpd-example\" created deploymentconfig.apps.openshift.io \"httpd-example\" created --> Success Access your application via route 'httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org' Build scheduled, use 'oc logs -f buildconfig/httpd-example' to track its progress. Run 'oc status' to view your app.",
"oc get build httpd-example-1 --template='{{ .spec.output.to.name }}'",
"example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest",
"oc describe is httpd-example",
"Name: httpd-example Namespace: e2e-demo Created: 55 minutes ago Labels: app=httpd-example template=httpd-example Description: Keeps track of changes in the application image Annotations: openshift.io/generated-by=OpenShiftNewApp openshift.io/image.dockerRepositoryCheck=2023-10-02T17:56:45Z Image Repository: image-registry.openshift-image-registry.svc:5000/e2e-demo/httpd-example Image Lookup: local=false Unique Images: 0 Tags: 1 latest tagged from example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest",
"oc get route httpd-example --template='{{ .spec.host }}'",
"httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org",
"oc delete project e2e-demo"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/quay-bridge-operator
|
Chapter 1. Configuring accounts for RHEL AI
|
Chapter 1. Configuring accounts for RHEL AI There are a few accounts you need to set up before interacting with RHEL AI. Creating a Red Hat account You can create a Red Hat account by registering on the Red Hat website. You can follow the procedure in Register for a Red Hat account . Creating a Red Hat registry account Before you can download models from the Red Hat registry, you need to create a registry account and login using the CLI. You can view your account username and password by selecting the Regenerate Token button on the webpage. You can create a Red Hat registry account by selecting the New Service Account button on the Registry Service Accounts page. There are several ways you can log into your registry account via the CLI. Follow the procedure in Red Hat Container Registry authentication to login on your machine. Configuring Red Hat Insights for hybrid cloud deployments Red Hat Insights is an offering that gives you visibility to the environments you are deploying. This platform can also help identify operational and vulnerability risks in your system. For more information about Red Hat Insights, see Red Hat Insights data and application security . You can create a Red Hat Insights account using an activation key and organization parameters by following the procedure in Viewing an activation key . You can then configure your account on your machine by running the following command: USD rhc connect --organization <org id> --activation-key <created key> To run RHEL AI in a disconnected environment, or opt out of Red Hat Insights, run the following commands: USD sudo mkdir -p /etc/ilab USD sudo touch /etc/ilab/insights-opt-out
|
[
"rhc connect --organization <org id> --activation-key <created key>",
"sudo mkdir -p /etc/ilab sudo touch /etc/ilab/insights-opt-out"
] |
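A minimal sketch of the account setup flow described above, assuming you authenticate against the Red Hat registry with podman and a registry service account (one of the login methods covered by the linked Red Hat Container Registry authentication procedure); the service account credentials, organization ID, and activation key are placeholders.

```bash
# Log in to the Red Hat registry with a registry service account
# (username and token are placeholders from the Registry Service Accounts page).
podman login registry.redhat.io --username '<service_account_username>' --password '<service_account_token>'

# Connect the machine to Red Hat Insights with an activation key...
rhc connect --organization <org_id> --activation-key <created_key>

# ...or opt out of Insights for disconnected deployments.
sudo mkdir -p /etc/ilab
sudo touch /etc/ilab/insights-opt-out
```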
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/building_your_rhel_ai_environment/setting_up_accounts
|
Chapter 4. Monitoring distributed workloads
|
Chapter 4. Monitoring distributed workloads In OpenShift AI, you can view project metrics for distributed workloads, and view the status of all distributed workloads in the selected project. You can use these metrics to monitor the resources used by distributed workloads, assess whether project resources are allocated correctly, track the progress of distributed workloads, and identify corrective action when necessary. Note Data science pipelines workloads are not managed by the distributed workloads feature, and are not included in the distributed workloads metrics. 4.1. Viewing project metrics for distributed workloads In OpenShift AI, you can view the following project metrics for distributed workloads: CPU - The number of CPU cores that are currently being used by all distributed workloads in the selected project. Memory - The amount of memory in gibibytes (GiB) that is currently being used by all distributed workloads in the selected project. You can use these metrics to monitor the resources used by the distributed workloads, and assess whether project resources are allocated correctly. Prerequisites You have installed Red Hat OpenShift AI. On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled. You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. Your data science project contains distributed workloads. Procedure In the OpenShift AI left navigation pane, click Distributed Workloads Metrics . From the Project list, select the project that contains the distributed workloads that you want to monitor. Click the Project metrics tab. Optional: From the Refresh interval list, select a value to specify how frequently the graphs on the metrics page are refreshed to show the latest data. You can select one of these values: 15 seconds , 30 seconds , 1 minute , 5 minutes , 15 minutes , 30 minutes , 1 hour , 2 hours , or 1 day . In the Requested resources section, review the CPU and Memory graphs to identify the resources requested by distributed workloads as follows: Requested by the selected project Requested by all projects, including the selected project and projects that you cannot access Total shared quota for all projects, as provided by the cluster queue For each resource type ( CPU and Memory ), subtract the Requested by all projects value from the Total shared quota value to calculate how much of that resource quota has not been requested and is available for all projects. Scroll down to the Top resource-consuming distributed workloads section to review the following graphs: Top 5 distributed workloads that are consuming the most CPU resources Top 5 distributed workloads that are consuming the most memory You can also identify how much CPU or memory is used in each case. Scroll down to view the Distributed workload resource metrics table, which lists all of the distributed workloads in the selected project, and indicates the current resource usage and the status of each distributed workload. In each table entry, progress bars indicate how much of the requested CPU and memory is currently being used by this distributed workload. To see numeric values for the actual usage and requested usage for CPU (measured in cores) and memory (measured in GiB), hover the cursor over each progress bar. Compare the actual usage with the requested usage to assess the distributed workload configuration. 
If necessary, reconfigure the distributed workload to reduce or increase the requested resources. Verification On the Project metrics tab, the graphs and table provide resource-usage data for the distributed workloads in the selected project. 4.2. Viewing the status of distributed workloads In OpenShift AI, you can view the status of all distributed workloads in the selected project. You can track the progress of the distributed workloads, and identify corrective action when necessary. Prerequisites You have installed Red Hat OpenShift AI. On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled. You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. Your data science project contains distributed workloads. Procedure In the OpenShift AI left navigation pane, click Distributed Workloads Metrics . From the Project list, select the project that contains the distributed workloads that you want to monitor. Click the Distributed workload status tab. Optional: From the Refresh interval list, select a value to specify how frequently the graphs on the metrics page are refreshed to show the latest data. You can select one of these values: 15 seconds , 30 seconds , 1 minute , 5 minutes , 15 minutes , 30 minutes , 1 hour , 2 hours , or 1 day . In the Status overview section, review a summary of the status of all distributed workloads in the selected project. The status can be Pending , Inadmissible , Admitted , Running , Evicted , Succeeded , or Failed . Scroll down to view the Distributed workloads table, which lists all of the distributed workloads in the selected project. The table provides the priority, status, creation date, and latest message for each distributed workload. The latest message provides more information about the current status of the distributed workload. Review the latest message to identify any corrective action needed. For example, a distributed workload might be Inadmissible because the requested resources exceed the available resources. In such cases, you can either reconfigure the distributed workload to reduce the requested resources, or reconfigure the cluster queue for the project to increase the resource quota. Verification On the Distributed workload status tab, the graph provides a summarized view of the status of all distributed workloads in the selected project, and the table provides more details about the status of each distributed workload. 4.3. Viewing Kueue alerts for distributed workloads In OpenShift AI, you can view Kueue alerts for your cluster. Each alert provides a link to a runbook . The runbook provides instructions on how to resolve the situation that triggered the alert. Prerequisites You have logged in to OpenShift with the cluster-admin role. You can access a data science cluster that is configured to run distributed workloads as described in Managing distributed workloads . You can access a data science project that contains a workbench, and the workbench is running a default notebook image that contains the CodeFlare SDK, for example, the Standard Data Science notebook. For information about projects and workbenches, see Working on data science projects . You have logged in to Red Hat OpenShift AI. Your data science project contains distributed workloads. Procedure In the OpenShift console, in the Administrator perspective, click Observe Alerting . 
Click the Alerting rules tab to view a list of alerting rules for default and user-defined projects. The Severity column indicates whether the alert is informational, a warning, or critical. The Alert state column indicates whether a rule is currently firing. Click the name of an alerting rule to see more details, such as the condition that triggers the alert. The following table summarizes the alerting rules for Kueue resources. Table 4.1. Alerting rules for Kueue resources Severity Name Alert condition Critical KueuePodDown The Kueue pod is not ready for a period of 5 minutes. Info LowClusterQueueResourceUsage Resource usage in the cluster queue is below 20% of its nominal quota for more than 1 day. Resource usage refers to any resources listed in the cluster queue, such as CPU, memory, and so on. Info ResourceReservationExceedsQuota Resource reservation is 10 times the available quota in the cluster queue. Resource reservation refers to any resources listed in the cluster queue, such as CPU, memory, and so on. Info PendingWorkloadPods A pod has been in a Pending state for more than 3 days. If the Alert state of an alerting rule is set to Firing , complete the following steps: Click Observe Alerting and then click the Alerts tab. Click each alert for the firing rule, to see more details. Note that a separate alert is fired for each resource type affected by the alerting rule. On the alert details page, in the Runbook section, click the link to open a GitHub page that provides troubleshooting information. Complete the runbook steps to identify the cause of the alert and resolve the situation. Verification After you resolve the cause of the alert, the alerting rule stops firing.
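The dashboard views described above are the supported way to monitor distributed workloads. If you prefer the command line, the following hedged sketch assumes the Kueue CustomResourceDefinitions used by the distributed workloads feature are installed on the cluster and that my-project is your data science project namespace; the workload name is a placeholder.

```bash
# List Kueue workload objects in the project; their conditions reflect
# admission states such as Pending or Admitted.
oc get workloads -n my-project

# Inspect the conditions and latest message for a single workload, similar
# to the latest-message column in the Distributed workloads table.
oc describe workload <workload_name> -n my-project
```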
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_distributed_workloads/monitoring-distributed-workloads_distributed-workloads
|
Chapter 3. Setting up the environment for an OpenShift installation
|
Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command. 
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
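To make the placeholders in the static-addressing command above concrete, the following sketch shows the same bare-metal bridge configuration with example values filled in. The interface name eno1, the 10.0.0.5/24 address, the 10.0.0.1 gateway, and the 10.0.0.2 DNS server are assumptions for illustration only, and ipv4.addresses is spelled out in full rather than abbreviated.

```bash
# Illustration only: bridge "baremetal" with a static IPv4 address,
# enslaving the physical NIC eno1. The documented procedure wraps these
# commands in `sudo nohup bash -c "..."` because the SSH session can drop
# while the bridge comes up.
PUB_CONN=eno1
sudo nmcli con down "$PUB_CONN"
sudo nmcli con delete "$PUB_CONN"
sudo nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no \
    ipv4.method manual ipv4.addresses 10.0.0.5/24 ipv4.gateway 10.0.0.1 ipv4.dns 10.0.0.2
sudo nmcli con add type bridge-slave ifname "$PUB_CONN" master baremetal
sudo nmcli con up baremetal
```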
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing OpenShift Container Platform to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. Important All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. Deploying a cluster with multiple subnets requires using virtual media. This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local worker nodes. The second subnet ( 192.168.0.0 ) contains the edge worker nodes. 
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote worker node by running the following command: USD sudo su - Get the name of the network interface by running the following command: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway by running the following command: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes by running the following command: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully by running the following command: # ip route Repeat the steps for each worker node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. Once you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet by running the following command: USD ping <remote_worker_node_ip_address> If the ping is successful, it means the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet by running the following command: USD ping <control_plane_node_ip_address> If the ping is successful, it means the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.6. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.15 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.7. 
Extracting the OpenShift Container Platform installer After retrieving the installer, the step is to extract it. Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.8. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will timeout. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" 
USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.9. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.10. Configuring the install-config.yaml file 3.10.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . 
Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 4 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticDNS configuration setting to specify the DNS address for the bootstrap VM when there is no DHCP server on the bare-metal network. 5 See the BMC addressing sections for more options. 6 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. 
Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.10.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticDNS The static network DNS of the bootstrap node. You must set this value when deploying a cluster with static IP addresses when there is no Dynamic Host Configuration Protocol (DHCP) server on the bare-metal network. If you do not set this value, the installation program will use the value from bootstrapExternalStaticGateway , which causes problems when the IP address values of the gateway and DNS are different. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. 
For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. 
clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.10.3. 
BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Redfish APIs Several redfish API endpoints are called on your BMC when using the bare-metal installer-provisioned infrastructure. Important You need to ensure that your BMC supports all of the redfish APIs before installation.
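One quick way to confirm basic Redfish access before installation is to query the Systems collection on the BMC. This is a sketch rather than part of the documented procedure; <out_of_band_ip>, <user>, and <password> are placeholder values, and the -k option skips certificate verification, which is only appropriate with self-signed BMC certificates.
curl -k -u <user>:<password> https://<out_of_band_ip>/redfish/v1/Systems/
If the call returns a JSON collection of system resources, the BMC is reachable and the account has Redfish access.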
List of redfish APIs Power on curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Power off curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Temporary boot using pxe curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}}' Set BIOS boot mode using Legacy or UEFI curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}}' List of redfish-virtualmedia APIs Set temporary boot device using cd or dvd curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Mount virtual media curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.10.4. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware.
The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.10.5. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. 
BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.10.6. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.10.7. BMC addressing for Cisco CIMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Cisco UCS UCSX-210C-M6 hardware, Red Hat supports Cisco Integrated Management Controller (CIMC). Table 3.6. BMC address format for Cisco CIMC Protocol Address Format Redfish virtual media redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> To enable Redfish virtual media for Cisco UCS UCSX-210C-M6 hardware, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration by using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True 3.10.8. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.7. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 
wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.10.9. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.10.10. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.10.11. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. 
Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> Note For a cluster with dual-stack networking configuration, you must assign both IPv4 and IPv6 addresses to the same interface. 3.10.12. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values.
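The networkConfig field also accepts the other NMState constructs mentioned above, such as VLANs. The following is a minimal sketch only, not an example from this document; the interface name, VLAN ID, and address are illustrative, and you can validate the snippet with nmstatectl gc in the same way as the example above.
interfaces:
- name: enp2s0.100
  type: vlan
  state: up
  vlan:
    base-iface: enp2s0
    id: 100
  ipv4:
    address:
    - ip: 192.0.2.20
      prefix-length: 24
    enabled: true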
Important After deploying the cluster, you cannot modify the networkConfig configuration setting of install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.10.13. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane. Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.10.14. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the RAM disk and the cluster configuration files. If you do not configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). 
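If you are unsure which address generation mode a host currently uses, one way to check is with NetworkManager's nmcli. This is a sketch that assumes NetworkManager manages the connection on the host; <connection_name> is a placeholder.
nmcli -f ipv6.addr-gen-mode connection show <connection_name>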
Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.10.15. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. 
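For reference, the supported bond modes listed above can also be expressed in NMState without SR-IOV virtual functions. The following is a minimal sketch of an 802.3ad bond and is not taken from this document; the interface names and addressing are illustrative, and the full dual port NIC example appears in the procedure that follows.
interfaces:
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: 802.3ad
    port:
    - eno1
    - eno2
  ipv4:
    dhcp: true
    enabled: true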
Procedure Add the NMState configuration to the networkConfig field to hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.10.16. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings.
Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.10.17. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.11. Manifest configuration files 3.11.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.11.2. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. 
Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.11.3. Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. 
Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.11.4. Optional: Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.11.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. 
Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare-metal configuration 3.11.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) using baseboard management controllers (BMCs) during the installation process. Note If you want to configure a hardware RAID for the node, verify that the node has a supported RAID controller. OpenShift Container Platform 4.15 does not support software RAID. Table 3.8. Hardware RAID support by vendor Vendor BMC and protocol Firmware version RAID levels Fujitsu iRMC N/A 0, 1, 5, 6, and 10 Dell iDRAC with Redfish Version 6.10.30.20 or later 0, 1, and 5 Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.15 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.11.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. The following MachineSet manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. 
Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare-metal configuration Partition naming scheme 3.12. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. This could be for enhancing network efficiency because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that a container on a system will serve. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.12.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.12.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . 
Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. 
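The recorded imageContentSources section has roughly the following shape. This sketch uses the example registry name that appears in the next section; your mirror host name, port, and repository will differ, so always copy the section from the actual command output.
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev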
The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the output of podman images shows Quay.io as the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-baremetal-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: $ openshift-baremetal-install 3.12.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: $ echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. $ sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: $ echo "imageContentSources:" >> install-config.yaml $ echo "- mirrors:" >> install-config.yaml $ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. $ echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml $ echo "- mirrors:" >> install-config.yaml $ echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. $ echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml The resulting additions should resemble the example fragment shown after the validation checklist below. 3.13. Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on worker nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validated disconnected registry settings if in use.
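For reference, after running the commands in section 3.12.3 the disconnected-registry additions to install-config.yaml should look roughly like the following fragment. This is an illustrative sketch only: registry.example.com, the repository path, and the certificate body are placeholders that depend on your environment.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <contents of /opt/registry/certs/domain.crt>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev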
|
[
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_worker_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.15",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 bootstrapExternalStaticDNS: <bootstrap_static_dns> 4 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 5 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 6 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: *\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<server_kvm_ip>/redfish/v1/Systems/<serial_number> username: <user> password: <password> disableCertificateVerification: True",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.15.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow
|
Chapter 9. Configuring Desktop with GSettings and dconf
|
Chapter 9. Configuring Desktop with GSettings and dconf 9.1. Terminology Explained: GSettings, gsettings, and dconf This section defines several terms that are easily confused. dconf dconf is a key-based configuration system which manages user settings. It is the back end for GSettings used in Red Hat Enterprise Linux 7. dconf manages a range of different settings, including GDM, application, and proxy settings. dconf The dconf command-line utility is used for reading and writing individual values or entire directories from and to a dconf database. GSettings GSettings is a high-level API for application settings and a front end for dconf. gsettings The gsettings command-line tool is used to view and change user settings.
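For example, a minimal illustration of how the two command-line tools relate (the org.gnome.desktop.background schema and its picture-uri key ship with GNOME; the image path is a placeholder):
$ gsettings get org.gnome.desktop.background picture-uri
$ gsettings set org.gnome.desktop.background picture-uri 'file:///usr/share/backgrounds/example.png'
$ dconf read /org/gnome/desktop/background/picture-uri
$ dconf dump /org/gnome/desktop/background/
The gsettings commands read and write the key through the GSettings API, while the dconf commands operate directly on the corresponding path in the dconf database.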
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/configuration-overview-gsettings-dconf
|
Chapter 3. Using the command line interface
|
Chapter 3. Using the command line interface The command line interface (CLI) allows interaction with the message broker by using an interactive terminal. Manage broker actions, configure messages, and enter useful commands by using the CLI. The CLI also allows users and roles to be added to files by using an interactive process. 3.1. Starting broker instances A broker instance is a directory containing all the configuration and runtime data, such as logs and data files. The runtime data is associated with a unique broker process. You can start a broker in the foreground by using the artemis script, as a Linux service, or as a Windows service. 3.1.1. Starting the broker instance After the broker instance is created, you use the artemis run command to start it. Procedure Switch to the user account you created during installation. $ su - amq-broker Use the artemis run command to start the broker instance. The broker starts and displays log output with the following information: The location of the transaction logs and cluster configuration. The type of journal being used for message persistence (AIO in this case). The URI(s) that can accept client connections. By default, port 61616 can accept connections from any of the supported protocols (CORE, MQTT, AMQP, STOMP, HORNETQ, and OPENWIRE). There are separate, individual ports for each protocol as well. The web console is available at http://localhost:8161 . The Jolokia service (JMX over REST) is available at http://localhost:8161/jolokia . 3.1.2. Starting a broker as a Linux service If the broker is installed on Linux, you can run it as a service. Procedure Create a new amq-broker.service file in the /etc/systemd/system/ directory. Copy the following text into the file. Modify the path and user fields according to the information provided during the broker instance creation. In the example below, the user amq-broker starts the broker service installed under the /var/opt/amq-broker/mybroker/ directory. Open a terminal. Enable the broker service using the following command: Run the broker service using the following command: 3.1.3. Starting a broker as a Windows service If the broker is installed on Windows, you can run it as a service. Procedure Open a command prompt to enter the commands. Install the broker as a service with the following command: Start the service by using the following command: (Optional) Uninstall the service: 3.2. Stopping broker instances Stop the broker instance manually or configure the broker to shut down gracefully. 3.2.1. Stopping the broker instance After creating the standalone broker and producing and consuming test messages, you can stop the broker instance. This procedure manually stops the broker, which forcefully closes all client connections. In a production environment, you should configure the broker to stop gracefully so that client connections can be closed properly. Procedure Use the artemis stop command to stop the broker instance: $ /var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!
When graceful-shutdown-enabled is set to true , no new client connections are allowed after a stop command is entered. However, existing connections are allowed to close on the client-side before the shutdown process is started. The default value for graceful-shutdown-enabled is false . Use the graceful-shutdown-timeout configuration element to set a length of time, in milliseconds, for clients to disconnect before connections are forcefully closed from the broker side. After all connections are closed, the shutdown process is started. One advantage of using graceful-shutdown-timeout is that it prevents client connections from delaying a shutdown. The default value for graceful-shutdown-timeout is -1 , meaning the broker waits indefinitely for clients to disconnect. The following procedure demonstrates how to configure a graceful shutdown that uses a timeout. Procedure Open the configuration file <broker_instance_dir> \etc\broker.xml . Add the graceful-shutdown-enabled configuration element and set the value to true . <configuration> <core> ... <graceful-shutdown-enabled> true </graceful-shutdown-enabled> ... </core> </configuration> Add the graceful-shutdown-timeout configuration element and set a value for the timeout in milliseconds. In the following example, client connections are forcefully closed 30 seconds ( 30000 milliseconds) after the stop command is issued. <configuration> <core> ... <graceful-shutdown-enabled> true </graceful-shutdown-enabled> <graceful-shutdown-timeout> 30000 </graceful-shutdown-timeout> ... </core> </configuration> 3.3. Auditing messages by intercepting packets Intercept packets entering or exiting the broker, to audit packets or filter messages. Interceptors change the packets that they intercept. This makes interceptors powerful, but also potentially dangerous. Develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 3.3.1. Creating interceptors Interceptors can change the packets they intercept. You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the <broker_instance_dir> /lib directory because it is part of the classpath by default. The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note The examples implement a specific interface for each protocol. Procedure Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. 
package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.activemq.artemis.api.core.Interceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 3.3.2. Configuring the broker to use interceptors Prerequisites Create an interceptor class and add it (and its dependencies) to the Java classpath of the broker. You can use the <broker_instance_dir>/lib directory since it is part of the classpath by default. Procedure Open <broker_instance_dir>/etc/broker.xml. Configure the broker to use an interceptor by adding configuration to <broker_instance_dir>/etc/broker.xml. If the interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ...
</core> </configuration> If the interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> 3.3.3. Interceptors on the client side Clients can use interceptors to intercept packets either sent by the client to the server or by the server to the client. If the broker-side interceptor returns a false value, then no other interceptors are called and the client does not process the packet further. This process happens transparently, unless an outgoing packet is sent in a blocking fashion. In this case, an ActiveMQException is thrown to the caller. The ActiveMQException thrown contains the name of the interceptor that returned the false value. As on the server, the client interceptor classes and their dependencies must be added to the Java classpath of the client to be properly instantiated and invoked. 3.4. Checking the health of brokers and queues AMQ Broker includes a command-line utility that enables you to perform various health checks on brokers and queues in your broker topology. The following example shows how to use the utility to run health checks. Procedure See the list of checks that you can run for a particular broker (that is, node) in your broker topology. $ <broker_instance_dir> /bin/artemis help check node You see output that describes the set of options that you can use with the artemis check node command. NAME artemis check node - Check a node SYNOPSIS artemis check node [--backup] [--clientID <clientID>] [--diskUsage <diskUsage>] [--fail-at-end] [--live] [--memoryUsage <memoryUsage>] [--name <name>] [--password <password>] [--peers <peers>] [--protocol <protocol>] [--silent] [--timeout <timeout>] [--up] [--url <brokerURL>] [--user <user>] [--verbose] OPTIONS --backup Check that the node has a backup --clientID <clientID> ClientID to be associated with connection --diskUsage <diskUsage> Disk usage percentage to check or -1 to use the max-disk-usage --fail-at-end If a particular module check fails, continue the rest of the checks --live Check that the node has a live --memoryUsage <memoryUsage> Memory usage percentage to check --name <name> Name of the target to check --password <password> Password used to connect --peers <peers> Number of peers to check --protocol <protocol> Protocol used. Valid values are amqp or core. Default=core. --silent It will disable all the inputs, and it would make a best guess for any required input --timeout <timeout> Time to wait for the check execution, in milliseconds --up Check that the node is started, it is executed by default if there are no other checks --url <brokerURL> URL towards the broker. (default: tcp://localhost:61616) --user <user> User used to connect --verbose Adds more information on the execution For example, check that the disk usage of the local broker is below the maximum disk usage configured for the broker. $ <broker_instance_dir> /bin/artemis check node --url tcp://localhost:61616 --diskUsage -1 Connection brokerURL = tcp://localhost:61616 Running NodeCheck Checking that the disk usage is less then the max-disk-usage ...
success Checks run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec - NodeCheck In the preceding example, specifying a value of -1 for the --diskUsage option means that the utility checks disk usage against the maximum disk usage configured for the broker. The maximum disk usage of a broker is configured using the max-disk-usage parameter in the broker.xml configuration file. The value specified for max-disk-usage represents the percentage of available physical disk space that the broker is allowed to consume. See the list of checks that you can run for a particular queue in your broker topology. $ <broker_instance_dir> /bin/artemis help check queue You see output that describes the set of options that you can use with the artemis check queue command. NAME artemis check queue - Check a queue SYNOPSIS artemis check queue [--browse <browse>] [--clientID <clientID>] [--consume <consume>] [--fail-at-end] [--name <name>] [--password <password>] [--produce <produce>] [--protocol <protocol>] [--silent] [--timeout <timeout>] [--up] [--url <brokerURL>] [--user <user>] [--verbose] OPTIONS --browse <browse> Number of the messages to browse or -1 to check that the queue is browsable --clientID <clientID> ClientID to be associated with connection --consume <consume> Number of the messages to consume or -1 to check that the queue is consumable --fail-at-end If a particular module check fails, continue the rest of the checks --name <name> Name of the target to check --password <password> Password used to connect --produce <produce> Number of the messages to produce --protocol <protocol> Protocol used. Valid values are amqp or core. Default=core. --silent It will disable all the inputs, and it would make a best guess for any required input --timeout <timeout> Time to wait for the check execution, in milliseconds --up Check that the queue exists and is not paused, it is executed by default if there are no other checks --url <brokerURL> URL towards the broker. (default: tcp://localhost:61616) --user <user> User used to connect --verbose Adds more information on the execution The utility can execute multiple options with a single command. For example, to check production, browsing, and consumption of 1000 messages on the default helloworld queue on the local broker, use the following command: $ <broker_instance_dir> /bin/artemis check queue --name helloworld --produce 1000 --browse 1000 --consume 1000 Connection brokerURL = tcp://localhost:61616 Running QueueCheck Checking that a producer can send 1000 messages to the queue helloworld ... success Checking that a consumer can browse 1000 messages from the queue helloworld ... success Checking that a consumer can consume 1000 messages from the queue helloworld ... success Checks run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.882 sec - QueueCheck In the preceding example, observe that you did not specify a broker URL when running the queue check. If you do not explicitly specify a URL, the utility uses a default value of tcp://localhost:61616 . 3.5. Command line tools AMQ Broker includes a set of command line interface (CLI) tools, so you can manage your messaging journal. The table below lists the name for each tool and its corresponding description. Tool Description address Addresses tool groups (create/delete/update/show) (example ./artemis address create ). browser Browses messages on an instance. consumer Consumes messages on an instance. data Prints reports about journal records and compacts the data.
decode Imports the internal journal format from encode. encode Shows an internal format of the journal encoded to String. exp Exports the message data using a special and independent XML format. help Displays help information. imp Imports the journal to a running broker using the output provided by exp . kill Kills a broker instance started with --allow-kill. mask Masks a password and prints it out. perf-journal Calculates the journal-buffer timeout you should use with the current data folder. queue Queues tool groups (create/delete/update/stat) (example ./artemis queue create ). run Runs the broker instance. stop Stops the broker instance. user Default file-based user management (add/rm/list/reset) (example ./artemis user list ) For a full list of commands available for each tool, use the help parameter followed by the tool's name. For instance, in the example below, the CLI output lists all the commands available to the data tool after the user enters the command ./artemis help data . You can use the help parameter for more information on how to execute each of the commands. For example, the CLI lists more information about the data print command after the user enters ./artemis help data print .
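As a brief sketch of how these tools are invoked (the broker instance path, connector URL, and credentials below are assumptions for illustration, not values from this guide):
$ /var/opt/amq-broker/mybroker/bin/artemis queue stat --url tcp://localhost:61616 --user admin --password admin
$ /var/opt/amq-broker/mybroker/bin/artemis mask mysecretpassword
The first command prints per-queue statistics, such as message counts, for the broker listening on the given URL; the second prints a masked form of the supplied password that you can paste into configuration files in place of the plain-text value.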
|
[
"su - amq-broker",
"/var/opt/amq-broker/mybroker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat JBoss AMQ 7.2.1.GA 10:53:43,959 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 10:53:44,076 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=./data/journal,bindingsDirectory=./data/bindings,largeMessagesDirectory=./data/large-messages,pagingDirectory=./data/paging) 10:53:44,099 INFO [org.apache.activemq.artemis.core.server] AMQ221012: Using AIO Journal",
"[Unit] Description=AMQ Broker After=syslog.target network.target [Service] ExecStart=/var/opt/amq-broker/mybroker/bin/artemis run Restart=on-failure User=amq-broker Group=amq-broker A workaround for Java signal handling SuccessExitStatus=143 [Install] WantedBy=multi-user.target",
"sudo systemctl enable amq-broker",
"sudo systemctl start amq-broker",
"<broker_instance_dir> \\bin\\artemis-service.exe install",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<broker_instance_dir> \\bin\\artemis-service.exe uninstall",
"/var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!",
"<configuration> <core> <graceful-shutdown-enabled> true </graceful-shutdown-enabled> </core> </configuration>",
"<configuration> <core> <graceful-shutdown-enabled> true </graceful-shutdown-enabled> <graceful-shutdown-timeout> 30000 </graceful-shutdown-timeout> </core> </configuration>",
"package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }",
"<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>",
"<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>",
"<broker_instance_dir> /bin/artemis help check node",
"NAME artemis check node - Check a node SYNOPSIS artemis check node [--backup] [--clientID <clientID>] [--diskUsage <diskUsage>] [--fail-at-end] [--live] [--memoryUsage <memoryUsage>] [--name <name>] [--password <password>] [--peers <peers>] [--protocol <protocol>] [--silent] [--timeout <timeout>] [--up] [--url <brokerURL>] [--user <user>] [--verbose] OPTIONS --backup Check that the node has a backup --clientID <clientID> ClientID to be associated with connection --diskUsage <diskUsage> Disk usage percentage to check or -1 to use the max-disk-usage --fail-at-end If a particular module check fails, continue the rest of the checks --live Check that the node has a live --memoryUsage <memoryUsage> Memory usage percentage to check --name <name> Name of the target to check --password <password> Password used to connect --peers <peers> Number of peers to check --protocol <protocol> Protocol used. Valid values are amqp or core. Default=core. --silent It will disable all the inputs, and it would make a best guess for any required input --timeout <timeout> Time to wait for the check execution, in milliseconds --up Check that the node is started, it is executed by default if there are no other checks --url <brokerURL> URL towards the broker. (default: tcp://localhost:61616) --user <user> User used to connect --verbose Adds more information on the execution",
"<broker_instance_dir> /bin/artemis check node --url tcp://localhost:61616 --diskUsage -1 Connection brokerURL = tcp://localhost:61616 Running NodeCheck Checking that the disk usage is less then the max-disk-usage ... success Checks run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.065 sec - NodeCheck",
"<broker_instance_dir> /bin/artemis help check queue",
"NAME artemis check queue - Check a queue SYNOPSIS artemis check queue [--browse <browse>] [--clientID <clientID>] [--consume <consume>] [--fail-at-end] [--name <name>] [--password <password>] [--produce <produce>] [--protocol <protocol>] [--silent] [--timeout <timeout>] [--up] [--url <brokerURL>] [--user <user>] [--verbose] OPTIONS --browse <browse> Number of the messages to browse or -1 to check that the queue is browsable --clientID <clientID> ClientID to be associated with connection --consume <consume> Number of the messages to consume or -1 to check that the queue is consumable --fail-at-end If a particular module check fails, continue the rest of the checks --name <name> Name of the target to check --password <password> Password used to connect --produce <produce> Number of the messages to produce --protocol <protocol> Protocol used. Valid values are amqp or core. Default=core. --silent It will disable all the inputs, and it would make a best guess for any required input --timeout <timeout> Time to wait for the check execution, in milliseconds --up Check that the queue exists and is not paused, it is executed by default if there are no other checks --url <brokerURL> URL towards the broker. (default: tcp://localhost:61616) --user <user> User used to connect --verbose Adds more information on the execution",
"<broker_instance_dir> /bin/artemis check queue --name helloworld --produce 1000 --browse 1000 --consume 1000 Connection brokerURL = tcp://localhost:61616 Running QueueCheck Checking that a producer can send 1000 messages to the queue helloworld ... success Checking that a consumer can browse 1000 messages from the queue helloworld ... success Checking that a consumer can consume 1000 messages from the queue helloworld ... success Checks run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.882 sec - QueueCheck",
"./artemis help data NAME artemis data - data tools group (print|imp|exp|encode|decode|compact) (example ./artemis data print) SYNOPSIS artemis data artemis data compact [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data decode [--broker <brokerConfig>] [--suffix <suffix>] [--verbose] [--paging <paging>] [--prefix <prefix>] [--file-size <size>] [--directory <directory>] --input <input> [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data encode [--directory <directory>] [--broker <brokerConfig>] [--suffix <suffix>] [--verbose] [--paging <paging>] [--prefix <prefix>] [--file-size <size>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data exp [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data imp [--host <host>] [--verbose] [--port <port>] [--password <password>] [--transaction] --input <input> [--user <user>] artemis data print [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] COMMANDS With no arguments, Display help information print Print data records information (WARNING: don't use while a production server is running)",
"./artemis help data print NAME artemis data print - Print data records information (WARNING: don't use while a production server is running) SYNOPSIS artemis data print [--bindings <binding>] [--journal <journal>] [--paging <paging>] OPTIONS --bindings <binding> The folder used for bindings (default ../data/bindings) --journal <journal> The folder used for messages journal (default ../data/journal) --paging <paging> The folder used for paging (default ../data/paging)"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/assembly-using-command-line-interface-managing
|
Installing on AWS
|
Installing on AWS. OpenShift Container Platform 4.15. Installing OpenShift Container Platform on Amazon Web Services. Red Hat OpenShift Documentation Team
|
[
"platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2",
"compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole",
"controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{\"auths\": ...}' 20",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 20 pullSecret: '{\"auths\": ...}' 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }",
"aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones",
"aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-phx-2a rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: edge-zone networking: clusterNetworkMTU: 8901 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #",
"apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1",
"./openshift-install create manifests --dir <installation_directory>",
"spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h",
"oc get nodes -l node-role.kubernetes.io/edge",
"NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }",
"aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones",
"aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: type: r5.2xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-edgezone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <wavelength_zone_name> #",
"apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-wl1-lax-wlz-1 - us-west-2-wl1-las-wlz-1 pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicOrPrivateSubnetID-Wavelength-1",
"./openshift-install create manifests --dir <installation_directory>",
"spec: template: spec: providerSpec: value: publicIp: true subnet: filters: - name: tag:Name values: - USD{INFRA_ID}-public-USD{ZONE_NAME}",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <infrastructure_id>-edge-<zone> namespace: openshift-machine-api spec: template: spec: providerSpec: value: publicIp: true",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m",
"oc get machines -n openshift-machine-api",
"NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-wl1-nyc-wlz-1-wbclh Running c5d.2xlarge us-east-1 us-east-1-wl1-nyc-wlz-1 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h",
"oc get nodes -l node-role.kubernetes.io/edge",
"NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructures.config.openshift.io cluster",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m",
"oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}'",
"oc get machinesets.machine.openshift.io <compute_machine_set_name_1> -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet.id}'",
"aws outposts list-outposts",
"aws outposts get-outpost-instance-types --outpost-id <outpost_id_value>",
"aws ec2 describe-subnets --filters Name=outpost-arn,Values=<outpost_arn_value>",
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 1000 } , \"machine\": { \"to\" : 1100} } } } }'",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get machineconfigpools",
"oc describe network.config cluster",
"aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" \\ 9 ParameterKey=PrivateSubnetLabel,ParameterValue=\"private-outpost\" ParameterKey=PublicSubnetLabel,ParameterValue=\"public-outpost\" ParameterKey=OutpostArn,ParameterValue=\"USD{OUTPOST_ARN}\" 10",
"arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f",
"aws cloudformation describe-stacks --stack-name <stack_name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String PrivateSubnetLabel: Default: \"private\" Description: Subnet label to be added when building the subnet name. Type: String PublicSubnetLabel: Default: \"public\" Description: Subnet label to be added when building the subnet name. Type: String OutpostArn: Default: \"\" Description: OutpostArn when creating subnets on AWS Outpost. Type: String Conditions: OutpostEnabled: !Not [!Equals [!Ref \"OutpostArn\", \"\"]] Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [ !Ref ClusterName, !Ref PublicSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 1 Value: true PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName OutpostArn: !If [ OutpostEnabled, !Ref OutpostArn, !Ref \"AWS::NoValue\"] Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, !Ref PrivateSubnetLabel, !Ref ZoneName]] - Key: kubernetes.io/cluster/unmanaged 2 Value: true PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml",
"oc get machinesets.machine.openshift.io <original_machine_set_name_1> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<availability_zone> spec: providerSpec: 3",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-outposts-<availability_zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: outposts machine.openshift.io/cluster-api-machine-type: outposts machine.openshift.io/cluster-api-machineset: <infrastructure_id>-outposts-<availability_zone> spec: metadata: labels: node-role.kubernetes.io/outposts: \"\" location: outposts providerSpec: value: ami: id: <ami_id> 3 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: volumeSize: 120 volumeType: gp2 4 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m5.xlarge 5 kind: AWSMachineProviderConfig placement: availabilityZone: <availability_zone> region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg subnet: id: <subnet_id> 7 tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned userDataSecret: name: worker-user-data taints: 8 - key: node-role.kubernetes.io/outposts effect: NoSchedule",
"oc create -f <new_machine_set_name_1>.yaml",
"oc get machinesets.machine.openshift.io -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m",
"oc get -n openshift-machine-api machines.machine.openshift.io -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>",
"NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned m5.xlarge us-east-1 us-east-1a 25s <machine_from_new_2> Provisioning m5.xlarge us-east-1 us-east-1a 25s",
"oc describe machine <machine_from_new_1> -n openshift-machine-api",
"kind: Namespace apiVersion: v1 metadata: name: <application_name> 1 --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <application_name> namespace: <application_namespace> 2 spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 3 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment metadata: name: <application_name> namespace: <application_namespace> spec: selector: matchLabels: app: <application_name> replicas: 1 template: metadata: labels: app: <application_name> location: outposts 4 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 5 node-role.kubernetes.io/outpost: '' tolerations: 6 - key: \"node-role.kubernetes.io/outposts\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: <application_name> ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <application_name>",
"oc create -f <application_deployment>.yaml",
"apiVersion: v1 kind: Service 1 metadata: name: <application_name> namespace: <application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <application_name>",
"oc create -f <application_service>.yaml",
"oc get nodes -l location=outposts",
"for NODE in USD(oc get node -l node-role.kubernetes.io/worker --no-headers | grep -v outposts | awk '{printUSD1}'); do oc label node USDNODE <key_name>=<value>; done",
"node1.example.com labeled node2.example.com labeled node3.example.com labeled",
"oc get nodes -l <key_name>=<value>",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.28.5 node2.example.com Ready worker 7h v1.28.5 node3.example.com Ready worker 7h v1.28.5",
"apiVersion: v1 kind: Service metadata: labels: app: <application_name> name: <application_name> namespace: <application_namespace> annotations: service.beta.kubernetes.io/aws-load-balancer-subnets: <aws_subnet> 1 service.beta.kubernetes.io/aws-load-balancer-target-node-labels: <key_name>=<value> 2 spec: ports: - name: http port: 80 protocol: TCP targetPort: 8080 selector: app: <application_name> type: LoadBalancer",
"oc create -f <file_name>.yaml",
"HOST=USD(oc get service <application_name> -n <application_namespace> --template='{{(index .status.loadBalancer.ingress 0).hostname}}')",
"curl USDHOST",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <application_name> annotations: alb.ingress.kubernetes.io/subnets: <subnet_id> 1 spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <application_name> port: number: 80",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2",
"2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted",
"./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2",
"aws cloudformation delete-stack --stack-name <local_zone_stack_name>",
"aws cloudformation delete-stack --stack-name <vpc_stack_name>",
"aws cloudformation describe-stacks --stack-name <local_zone_stack_name>",
"aws cloudformation describe-stacks --stack-name <vpc_stack_name>",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: preserveBootstrapIgnition:"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_aws/index
|
8.6. Disable SMART Disk Monitoring for Guest Virtual Machines
|
8.6. Disable SMART Disk Monitoring for Guest Virtual Machines SMART disk monitoring can be safely disabled as virtual disks and the physical storage devices are managed by the host physical machine.
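One way to confirm the change took effect (a minimal verification sketch, assuming the standard RHEL 6 service and chkconfig tooling used in the commands below; these checks are not part of the original procedure):
service smartd status     # report whether the smartd daemon is currently running
chkconfig --list smartd   # show whether smartd is still registered in any runlevel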
|
[
"service smartd stop chkconfig --del smartd"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-tips_and_tricks-disable_smart_disk_monitoring_for_guests
|
Chapter 4. EgressIP [k8s.ovn.org/v1]
|
Chapter 4. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressIP. status object Observed status of EgressIP. Read-only. 4.1.1. .spec Description Specification of the desired behavior of EgressIP. Type object Required egressIPs namespaceSelector Property Type Description egressIPs array (string) EgressIPs is the list of egress IP addresses requested. Can be IPv4 and/or IPv6. This field is mandatory. namespaceSelector object NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. podSelector object PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. 4.1.2. .spec.namespaceSelector Description NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.3. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.4. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.5. .spec.podSelector Description PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.6. .spec.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.7. .spec.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.1.8. .status Description Observed status of EgressIP. Read-only. Type object Required items Property Type Description items array The list of assigned egress IPs and their corresponding node assignment. items[] object The per node status, for those egress IPs who have been assigned. 4.1.9. .status.items Description The list of assigned egress IPs and their corresponding node assignment. Type array 4.1.10. .status.items[] Description The per node status, for those egress IPs who have been assigned. Type object Required egressIP node Property Type Description egressIP string Assigned egress IP node string Assigned node name 4.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressips DELETE : delete collection of EgressIP GET : list objects of kind EgressIP POST : create an EgressIP /apis/k8s.ovn.org/v1/egressips/{name} DELETE : delete an EgressIP GET : read the specified EgressIP PATCH : partially update the specified EgressIP PUT : replace the specified EgressIP 4.2.1. /apis/k8s.ovn.org/v1/egressips Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of EgressIP Table 4.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressIP Table 4.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.5. HTTP responses HTTP code Reponse body 200 - OK EgressIPList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressIP Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body EgressIP schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 201 - Created EgressIP schema 202 - Accepted EgressIP schema 401 - Unauthorized Empty 4.2.2. /apis/k8s.ovn.org/v1/egressips/{name} Table 4.9. Global path parameters Parameter Type Description name string name of the EgressIP Table 4.10. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an EgressIP Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.12. Body parameters Parameter Type Description body DeleteOptions schema Table 4.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressIP Table 4.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.15. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressIP Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body Patch schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressIP Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body EgressIP schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK EgressIP schema 201 - Created EgressIP schema 401 - Unauthorized Empty
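As a practical illustration of the endpoints listed above, the following minimal sketch shows an EgressIP object together with the oc commands that exercise the create, list, read, and delete operations. The object name, egress IP address, and namespace label are placeholder values, not part of this API reference.

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-sample
spec:
  egressIPs:
    - 192.168.126.101
  namespaceSelector:
    matchLabels:
      env: qa

oc apply -f egressip-sample.yaml         # POST /apis/k8s.ovn.org/v1/egressips
oc get egressips.k8s.ovn.org             # GET (list) /apis/k8s.ovn.org/v1/egressips
oc get egressip egressip-sample -o yaml  # GET /apis/k8s.ovn.org/v1/egressips/{name}
oc delete egressip egressip-sample       # DELETE /apis/k8s.ovn.org/v1/egressips/{name}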
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/egressip-k8s-ovn-org-v1
|
4.196. nfs-utils-lib
|
4.196. nfs-utils-lib 4.196.1. RHBA-2011:1750 - nfs-utils-lib bug fix update Updated nfs-utils-lib packages that fix one bug are now available for Red Hat Enterprise Linux 6. The nfs-utils-lib packages contain support libraries required by programs in the nfs-utils package. Bug Fix BZ# 711210 Prior to this update, libnfsidmap did not support LDAP. With this update, nfs-utils-lib provides LDAP support. All users of nfs-utils-lib are advised to upgrade to these updated packages, which fix this bug.
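As a hypothetical sketch of how the new LDAP support might be put to use, the following verifies the updated package and selects the LDAP-based translation method in /etc/idmapd.conf; the directory server name and search base are placeholders for your own environment, and the exact schema settings depend on your directory layout rather than on this erratum.

rpm -q nfs-utils-lib

[Translation]
Method = umich_ldap

[UMICH_SCHEMA]
LDAP_server = ldap.example.com
LDAP_base = dc=example,dc=com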
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/nfs-utils-lib
|
Chapter 1. Understanding the OpenShift Update Service
|
Chapter 1. Understanding the OpenShift Update Service For clusters with internet accessibility, Red Hat provides over-the-air updates through an OpenShift Container Platform update service as a hosted service located behind public APIs. Note If you are on a restricted network where disconnected clusters cannot access the public APIs, you can install the OpenShift Update Service locally. See Installing and configuring the OpenShift Update Service . 1.1. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the release image for that update to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an upgrade path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only upgrading to a newer version is supported. Reverting or rolling back your cluster to a version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes as specified by the maxUnavailable field on the machine configuration pool and marks them as unavailable. By default, this value is set to 1 . The MCO then applies the new configuration and reboots the machine. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 1.2. 
Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed.
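A brief illustration of both behaviors described above, with hedged, placeholder values: oc adm upgrade shows the updates that the OpenShift Update Service currently recommends for your cluster, and a spec.overrides entry in the ClusterVersion resource marks a single component as unmanaged. The component named below is purely an example; as the warning notes, setting any such override blocks upgrades and puts the cluster in an unsupported state.

oc adm upgrade
oc edit clusterversion version

spec:
  overrides:
    - kind: Deployment
      group: apps
      name: example-operator            # placeholder name, not a real component
      namespace: openshift-example-operator
      unmanaged: true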
|
[
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/updating_clusters/understanding-the-update-service
|
Chapter 23. Postfix
|
Chapter 23. Postfix Postfix is an open-source Mail Transport Agent ( MTA ), which supports protocols like LDAP, SMTP AUTH (SASL), and TLS. [22] In Red Hat Enterprise Linux, the postfix package provides Postfix. Enter the following command to see if the postfix package is installed: If it is not installed, use the yum utility as root to install it: 23.1. Postfix and SELinux When Postfix is enabled, it runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the Postfix and related processes running in their own domain. This example assumes the postfix package is installed and that the Postfix service has been started: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start postfix : Confirm that the service is running. The output should include the information below (only the time stamp will differ): Run the following command to view the postfix processes: In the output above, the SELinux context associated with the Postfix master process is system_u:system_r:postfix_master_t:s0 . The second-to-last part of the context, postfix_master_t , is the type for this process. A type defines a domain for processes and a type for files. In this case, the master process is running in the postfix_master_t domain. [22] For more information, see the Postfix section in the System Administrator's Guide .
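As a supplementary check, not part of the original procedure, you can also inspect the SELinux label of the Postfix binary itself; on a default targeted policy the executable is typically labeled with the postfix_exec_t type, which is what causes the service to transition into the postfix_master_t domain shown in the process listing:

ls -Z /usr/sbin/postfix
-rwxr-xr-x. root root system_u:object_r:postfix_exec_t:s0 /usr/sbin/postfix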
|
[
"~]USD rpm -q postfix package postfix is not installed",
"~]# yum install postfix",
"~]USD getenforce Enforcing",
"~]# systemctl start postfix.service",
"~]# systemctl status postfix.service postfix.service - Postfix Mail Transport Agent Loaded: loaded (/usr/lib/systemd/system/postfix.service; disabled) Active: active (running) since Mon 2013-08-05 11:38:48 CEST; 3h 25min ago",
"~]USD ps -eZ | grep postfix system_u:system_r:postfix_master_t:s0 1651 ? 00:00:00 master system_u:system_r:postfix_pickup_t:s0 1662 ? 00:00:00 pickup system_u:system_r:postfix_qmgr_t:s0 1663 ? 00:00:00 qmgr"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-postfix
|
Chapter 302. SFTP Component
|
Chapter 302. SFTP Component Available as of Camel version 1.1 This component provides access to remote file systems over the FTP and SFTP protocols. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ftp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> For more information you can look at FTP component 302.1. URI Options The options below are exclusive for the FTPS component. The SFTP component has no options. The SFTP endpoint is configured using URI syntax: with the following path and query parameters: 302.1.1. Path Parameters (3 parameters): Name Description Default Type host Required Hostname of the FTP server String port Port of the FTP server int directoryName The starting directory String 302.1.2. Query Parameters (117 parameters): Name Description Default Type charset (common) This option is used to specify the encoding of the file. You can use this on the consumer, to specify the encodings of the files, which allow Camel to know the charset it should load the file content in case the file content is being accessed. Likewise when writing a file, you can use this option to specify which charset to write the file as well. Do mind that when writing the file Camel may have to read the message content into memory to be able to convert the data into the configured charset, so do not use this if you have big messages. String disconnect (common) Whether or not to disconnect from remote FTP server right after use. Disconnect will only disconnect the current connection to the FTP server. If you have a consumer which you want to stop, then you need to stop the consumer/route instead. false boolean doneFileName (common) Producer: If provided, then Camel will write a 2nd done file when the original file has been written. The done file will be empty. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders. The done file will always be written in the same folder as the original file. Consumer: If provided, Camel will only consume files if a done file exists. This option configures what file name to use. Either you can specify a fixed name. Or you can use dynamic placeholders.The done file is always expected in the same folder as the original file. Only USDfile.name and USDfile.name.noext is supported as dynamic placeholders. String fileName (common) Use Expression such as File Language to dynamically set the filename. For consumers, it's used as a filename filter. For producers, it's used to evaluate the filename to write. If an expression is set, it take precedence over the CamelFileName header. (Note: The header itself can also be an Expression). The expression options support both String and Expression types. If the expression is a String type, it is always evaluated using the File Language. If the expression is an Expression type, the specified Expression type is used - this allows you, for instance, to use OGNL expressions. For the consumer, you can use it to filter filenames, so you can for instance consume today's file using the File Language syntax: mydata-USDdate:now:yyyyMMdd.txt. The producers support the CamelOverruleFileName header which takes precedence over any existing CamelFileName header; the CamelOverruleFileName is a header that is used only once, and makes it easier as this avoids to temporary store CamelFileName and have to restore it afterwards. 
String jschLoggingLevel (common) The logging level to use for JSCH activity logging. As JSCH is verbose at by default at INFO level the threshold is WARN by default. WARN LoggingLevel separator (common) Sets the path separator to be used. UNIX = Uses unix style path separator Windows = Uses windows style path separator Auto = (is default) Use existing path separator in file name UNIX PathSeparator fastExistsCheck (common) If set this option to be true, camel-ftp will use the list file directly to check if the file exists. Since some FTP server may not support to list the file directly, if the option is false, camel-ftp will use the old way to list the directory and check if the file exists. This option also influences readLock=changed to control whether it performs a fast check to update file information or not. This can be used to speed up the process if the FTP server has a lot of files. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean delete (consumer) If true, the file will be deleted after it is processed successfully. false boolean moveFailed (consumer) Sets the move failure expression based on Simple language. For example, to move files into a .error subdirectory use: .error. Note: When moving the files to the fail location Camel will handle the error and will not pick up the file again. String noop (consumer) If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again. false boolean preMove (consumer) Expression (such as File Language) used to dynamically set the filename when moving it before processing. For example to move in-progress files into the order directory set this value to order. String preSort (consumer) When pre-sort is enabled then the consumer will sort the file and directory names during polling, that was retrieved from the file system. You may want to do this in case you need to operate on the files in a sorted order. The pre-sort is executed before the consumer starts to filter, and accept files to process by Camel. This option is default=false meaning disabled. false boolean recursive (consumer) If a directory, will look for files in all the sub-directories as well. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean streamDownload (consumer) Sets the download method to use when not using a local working directory. If set to true, the remote files are streamed to the route as they are read. When set to false, the remote files are loaded into memory before being sent into the route. false boolean directoryMustExist (consumer) Similar to startingDirectoryMustExist but this applies during polling recursive sub directories. false boolean download (consumer) Whether the FTP consumer should download the file. 
If this option is set to false, then the message body will be null, but the consumer will still trigger a Camel Exchange that has details about the file such as file name, file size, etc. It's just that the file will not be downloaded. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern ignoreFileNotFoundOr PermissionError (consumer) Whether to ignore when (trying to list files in directories or when downloading a file), which does not exist or due to permission error. By default when a directory or file does not exists or insufficient permission, then an exception is thrown. Setting this option to true allows to ignore that instead. false boolean inProgressRepository (consumer) A pluggable in-progress repository org.apache.camel.spi.IdempotentRepository. The in-progress repository is used to account the current in progress files being consumed. By default a memory based repository is used. IdempotentRepository localWorkDirectory (consumer) When consuming, a local work directory can be used to store the remote file content directly in local files, to avoid loading the content into memory. This is beneficial, if you consume a very big remote file and thus can conserve memory. String onCompletionException Handler (consumer) To use a custom org.apache.camel.spi.ExceptionHandler to handle any thrown exceptions that happens during the file on completion process where the consumer does either a commit or rollback. The default implementation will log any exception at WARN level and ignore. ExceptionHandler pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy processStrategy (consumer) A pluggable org.apache.camel.component.file.GenericFileProcessStrategy allowing you to implement your own readLock option or similar. Can also be used when special conditions must be met before a file can be consumed, such as a special ready file exists. If this option is set then the readLock option does not apply. GenericFileProcess Strategy startingDirectoryMustExist (consumer) Whether the starting directory must exist. Mind that the autoCreate option is default enabled, which means the starting directory is normally auto created if it doesn't exist. You can disable autoCreate and enable this to ensure the starting directory must exist. Will thrown an exception if the directory doesn't exist. false boolean useList (consumer) Whether to allow using LIST command when downloading a file. Default is true. In some use cases you may want to download a specific file and are not allowed to use the LIST command, and therefore you can set this option to false. Notice when using this option, then the specific file to download does not include meta-data information such as file size, timestamp, permissions etc, because those information is only possible to retrieve when LIST command is in use. true boolean fileExist (producer) What to do if a file already exists with the same name. 
Override, which is the default, replaces the existing file. Append - adds content to the existing file. Fail - throws a GenericFileOperationException, indicating that there is already an existing file. Ignore - silently ignores the problem and does not override the existing file, but assumes everything is okay. Move - option requires to use the moveExisting option to be configured as well. The option eagerDeleteTargetFile can be used to control what to do if an moving the file, and there exists already an existing file, otherwise causing the move operation to fail. The Move option will move any existing files, before writing the target file. TryRename is only applicable if tempFileName option is in use. This allows to try renaming the file from the temporary name to the actual name, without doing any exists check. This check may be faster on some file systems and especially FTP servers. Override GenericFileExist flatten (producer) Flatten is used to flatten the file name path to strip any leading paths, so it's just the file name. This allows you to consume recursively into sub-directories, but when you eg write the files to another directory they will be written in a single directory. Setting this to true on the producer enforces that any file name in CamelFileName header will be stripped for any leading paths. false boolean jailStartingDirectory (producer) Used for jailing (restricting) writing files to the starting directory (and sub) only. This is enabled by default to not allow Camel to write files to outside directories (to be more secured out of the box). You can turn this off to allow writing files to directories outside the starting directory, such as parent or root folders. true boolean moveExisting (producer) Expression (such as File Language) used to compute file name to use when fileExist=Move is configured. To move files into a backup subdirectory just enter backup. This option only supports the following File Language tokens: file:name, file:name.ext, file:name.noext, file:onlyname, file:onlyname.noext, file:ext, and file:parent. Notice the file:parent is not supported by the FTP component, as the FTP component can only move any existing files to a relative directory based on current dir as base. String tempFileName (producer) The same as tempPrefix option but offering a more fine grained control on the naming of the temporary filename as it uses the File Language. String tempPrefix (producer) This option is used to write the file using a temporary name and then, after the write is complete, rename it to the real name. Can be used to identify files being written and also avoid consumers (not using exclusive read locks) reading in progress files. Is often used by FTP when uploading big files. String allowNullBody (producer) Used to specify if a null body is allowed during file writing. If set to true then an empty file will be created, when set to false, and attempting to send a null body to the file component, a GenericFileWriteException of 'Cannot write null body to file.' will be thrown. If the fileExist option is set to 'Override', then the file will be truncated, and if set to append the file will remain unchanged. false boolean chmod (producer) Allows you to set chmod on the stored file. For example chmod=640. String disconnectOnBatchComplete (producer) Whether or not to disconnect from remote FTP server right after a Batch upload is complete. disconnectOnBatchComplete will only disconnect the current connection to the FTP server. 
false boolean eagerDeleteTargetFile (producer) Whether or not to eagerly delete any existing target file. This option only applies when you use fileExists=Override and the tempFileName option as well. You can use this to disable (set it to false) deleting the target file before the temp file is written. For example you may write big files and want the target file to exists during the temp file is being written. This ensure the target file is only deleted until the very last moment, just before the temp file is being renamed to the target filename. This option is also used to control whether to delete any existing files when fileExist=Move is enabled, and an existing file exists. If this option copyAndDeleteOnRenameFails false, then an exception will be thrown if an existing file existed, if its true, then the existing file is deleted before the move operation. true boolean keepLastModified (producer) Will keep the last modified timestamp from the source file (if any). Will use the Exchange.FILE_LAST_MODIFIED header to located the timestamp. This header can contain either a java.util.Date or long with the timestamp. If the timestamp exists and the option is enabled it will set this timestamp on the written file. Note: This option only applies to the file producer. You cannot use this option with any of the ftp producers. false boolean moveExistingFileStrategy (producer) Strategy (Custom Strategy) used to move file with special naming token to use when fileExist=Move is configured. By default, there is an implementation used if no custom strategy is provided FileMoveExisting Strategy sendNoop (producer) Whether to send a noop command as a pre-write check before uploading files to the FTP server. This is enabled by default as a validation of the connection is still valid, which allows to silently re-connect to be able to upload the file. However if this causes problems, you can turn this option off. true boolean autoCreate (advanced) Automatically create missing directories in the file's pathname. For the file consumer, that means creating the starting directory. For the file producer, it means the directory the files should be written to. true boolean bindAddress (advanced) Specifies the address of the local interface against which the connection should bind. String bufferSize (advanced) Write buffer sized in bytes. 131072 int bulkRequests (advanced) Specifies how many requests may be outstanding at any one time. Increasing this value may slightly improve file transfer speed but will increase memory usage. Integer compression (advanced) To use compression. Specify a level from 1 to 10. Important: You must manually add the needed JSCH zlib JAR to the classpath for compression support. int connectTimeout (advanced) Sets the connect timeout for waiting for a connection to be established Used by both FTPClient and JSCH 10000 int maximumReconnectAttempts (advanced) Specifies the maximum reconnect attempts Camel performs when it tries to connect to the remote FTP server. Use 0 to disable this behavior. int proxy (advanced) To use a custom configured com.jcraft.jsch.Proxy. This proxy is used to consume/send messages from the target SFTP host. Proxy reconnectDelay (advanced) Delay in millis Camel will wait before performing a reconnect attempt. 
long serverAliveCountMax (advanced) Allows you to set the serverAliveCountMax of the sftp session 1 int serverAliveInterval (advanced) Allows you to set the serverAliveInterval of the sftp session int soTimeout (advanced) Sets the so timeout Used only by FTPClient 300000 int stepwise (advanced) Sets whether we should stepwise change directories while traversing file structures when downloading files, or as well when uploading a file to a directory. You can disable this if you for example are in a situation where you cannot change directory on the FTP server due security reasons. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean throwExceptionOnConnect Failed (advanced) Should an exception be thrown if connection failed (exhausted) By default exception is not thrown and a WARN is logged. You can use this to enable exception being thrown and handle the thrown exception from the org.apache.camel.spi.PollingConsumerPollStrategy rollback method. false boolean antExclude (filter) Ant style filter exclusion. If both antInclude and antExclude are used, antExclude takes precedence over antInclude. Multiple exclusions may be specified in comma-delimited format. String antFilterCaseSensitive (filter) Sets case sensitive flag on ant filter true boolean antInclude (filter) Ant style filter inclusion. Multiple inclusions may be specified in comma-delimited format. String eagerMaxMessagesPerPoll (filter) Allows for controlling whether the limit from maxMessagesPerPoll is eager or not. If eager then the limit is during the scanning of files. Where as false would scan all files, and then perform sorting. Setting this option to false allows for sorting all files first, and then limit the poll. Mind that this requires a higher memory usage as all file details are in memory to perform the sorting. true boolean exclude (filter) Is used to exclude files, if filename matches the regex pattern (matching is case in-senstive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String filter (filter) Pluggable filter as a org.apache.camel.component.file.GenericFileFilter class. Will skip files if filter returns false in its accept() method. GenericFileFilter filterDirectory (filter) Filters the directory based on Simple language. For example to filter on current date, you can use a simple date pattern such as USDdate:now:yyyMMdd String filterFile (filter) Filters the file based on Simple language. For example to filter on file size, you can use USDfile:size 5000 String idempotent (filter) Option to use the Idempotent Consumer EIP pattern to let Camel skip already processed files. Will by default use a memory based LRUCache that holds 1000 entries. If noop=true then idempotent will be enabled as well to avoid consuming the same files over and over again. false Boolean idempotentKey (filter) To use a custom idempotent key. By default the absolute path of the file is used. You can use the File Language, for example to use the file name and file size, you can do: idempotentKey=USDfile:name-USDfile:size String idempotentRepository (filter) A pluggable repository org.apache.camel.spi.IdempotentRepository which by default use MemoryMessageIdRepository if none is specified and idempotent is true. 
IdempotentRepository include (filter) Is used to include files, if filename matches the regex pattern (matching is case in-sensitive). Notice if you use symbols such as plus sign and others you would need to configure this using the RAW() syntax if configuring this as an endpoint uri. See more details at configuring endpoint uris String maxDepth (filter) The maximum depth to traverse when recursively processing a directory. 2147483647 int maxMessagesPerPoll (filter) To define a maximum messages to gather per poll. By default no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disabled it. Notice: If this option is in use then the File and FTP components will limit before any sorting. For example if you have 100000 files and use maxMessagesPerPoll=500, then only the first 500 files will be picked up, and then sorted. You can use the eagerMaxMessagesPerPoll option and set this to false to allow to scan all files first and then sort afterwards. int minDepth (filter) The minimum depth to start processing when recursively processing a directory. Using minDepth=1 means the base directory. Using minDepth=2 means the first sub directory. int move (filter) Expression (such as Simple Language) used to dynamically set the filename when moving it after processing. To move files into a .done subdirectory just enter .done. String exclusiveReadLockStrategy (lock) Pluggable read-lock as a org.apache.camel.component.file.GenericFileExclusiveReadLockStrategy implementation. GenericFileExclusive ReadLockStrategy readLock (lock) Used by consumer, to only poll the files if it has exclusive read-lock on the file (i.e. the file is not in-progress or being written). Camel will wait until the file lock is granted. This option provides the build in strategies: none - No read lock is in use markerFile - Camel creates a marker file (fileName.camelLock) and then holds a lock on it. This option is not available for the FTP component changed - Changed is using file length/modification timestamp to detect whether the file is currently being copied or not. Will at least use 1 sec to determine this, so this option cannot consume files as fast as the others, but can be more reliable as the JDK IO API cannot always determine whether a file is currently being used by another process. The option readLockCheckInterval can be used to set the check frequency. fileLock - is for using java.nio.channels.FileLock. This option is not avail on Windows or the FTP component. This approach should be avoided when accessing a remote file system via a mount/share unless that file system supports distributed file locks. rename - rename is for using a try to rename the file as a test if we can get exclusive read-lock. idempotent - (only for file component) idempotent is for using a idempotentRepository as the read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-changed - (only for file component) idempotent-changed is for using a idempotentRepository and changed as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. idempotent-rename - (only for file component) idempotent-rename is for using a idempotentRepository and rename as the combined read-lock. This allows to use read locks that supports clustering if the idempotent repository implementation supports that. 
Notice: The various read locks are not all suited to work in clustered mode, where concurrent consumers on different nodes are competing for the same files on a shared file system. The markerFile option uses a close-to-atomic operation to create the empty marker file, but it is not guaranteed to work in a cluster. The fileLock option may work better, but then the file system needs to support distributed file locks, and so on. Using the idempotent read lock can support clustering if the idempotent repository supports clustering, such as Hazelcast Component or Infinispan. none String readLockCheckInterval (lock) Interval in millis for the read-lock, if supported by the read lock. This interval is used for sleeping between attempts to acquire the read lock. For example when using the changed read lock, you can set a higher interval period to cater for slow writes. The default of 1 sec. may be too fast if the producer is very slow writing the file. Notice: For FTP the default readLockCheckInterval is 5000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that ample time is allowed for the read lock process to try to grab the lock before the timeout is hit. 1000 long readLockDeleteOrphanLock Files (lock) Whether or not read lock with marker files should upon startup delete any orphan read lock files, which may have been left on the file system if Camel was not properly shut down (such as a JVM crash). If turning this option to false then any orphaned lock file will cause Camel to not attempt to pick up that file; this could also be because another node is concurrently reading files from the same shared directory. true boolean readLockIdempotentRelease Async (lock) Whether the delayed release task should be synchronous or asynchronous. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockIdempotentRelease AsyncPoolSize (lock) The number of threads in the scheduled thread pool when using asynchronous release tasks. Using a default of 1 core thread should be sufficient in almost all use-cases; only set this to a higher value if either updating the idempotent repository is slow, or there are a lot of files to process. This option is not in use if you use a shared thread pool by configuring the readLockIdempotentReleaseExecutorService option. See more details at the readLockIdempotentReleaseDelay option. int readLockIdempotentRelease Delay (lock) Whether to delay the release task for a period of millis. This can be used to delay the release tasks to expand the window when a file is regarded as read-locked, in an active/active cluster scenario with a shared idempotent repository, to ensure other nodes cannot potentially scan and acquire the same file, due to race conditions. Expanding the time window of the release tasks helps prevent these situations. Note that delaying is only needed if you have configured readLockRemoveOnCommit to true. int readLockIdempotentRelease ExecutorService (lock) To use a custom and shared thread pool for asynchronous release tasks. See more details at the readLockIdempotentReleaseDelay option. ScheduledExecutor Service readLockLoggingLevel (lock) Logging level used when a read lock could not be acquired. By default a WARN is logged. You can change this level, for example to OFF to not have any logging.
This option is only applicable for readLock of types: changed, fileLock, idempotent, idempotent-changed, idempotent-rename, rename. DEBUG LoggingLevel readLockMarkerFile (lock) Whether to use marker file with the changed, rename, or exclusive read lock types. By default a marker file is used as well to guard against other processes picking up the same files. This behavior can be turned off by setting this option to false. For example if you do not want to write marker files to the file systems by the Camel application. true boolean readLockMinAge (lock) This option is applied only for readLock=changed. It allows to specify a minimum age the file must be before attempting to acquire the read lock. For example use readLockMinAge=300s to require the file is at last 5 minutes old. This can speedup the changed read lock as it will only attempt to acquire files which are at least that given age. 0 long readLockMinLength (lock) This option is applied only for readLock=changed. It allows you to configure a minimum file length. By default Camel expects the file to contain data, and thus the default value is 1. You can set this option to zero, to allow consuming zero-length files. 1 long readLockRemoveOnCommit (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file is succeeded and a commit happens. By default the file is not removed which ensures that any race-condition do not occur so another active node may attempt to grab the file. Instead the idempotent repository may support eviction strategies that you can configure to evict the file name entry after X minutes - this ensures no problems with race conditions. See more details at the readLockIdempotentReleaseDelay option. false boolean readLockRemoveOnRollback (lock) This option is applied only for readLock=idempotent. It allows to specify whether to remove the file name entry from the idempotent repository when processing the file failed and a rollback happens. If this option is false, then the file name entry is confirmed (as if the file did a commit). true boolean readLockTimeout (lock) Optional timeout in millis for the read-lock, if supported by the read-lock. If the read-lock could not be granted and the timeout triggered, then Camel will skip the file. At poll Camel, will try the file again, and this time maybe the read-lock could be granted. Use a value of 0 or lower to indicate forever. Currently fileLock, changed and rename support the timeout. Notice: For FTP the default readLockTimeout value is 20000 instead of 10000. The readLockTimeout value must be higher than readLockCheckInterval, but a rule of thumb is to have a timeout that is at least 2 or more times higher than the readLockCheckInterval. This is needed to ensure that amble time is allowed for the read lock process to try to grab the lock before the timeout was hit. 10000 long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. 
When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean shuffle (sort) To shuffle the list of files (sort in random order) false boolean sortBy (sort) Built-in sort by using the File Language. Supports nested sorts, so you can have a sort by file name and as a 2nd group sort by modified date. String sorter (sort) Pluggable sorter as a java.util.Comparator class. Comparator ciphers (security) Set a comma separated list of ciphers that will be used in order of preference. Possible cipher names are defined by JCraft JSCH. Some examples include: aes128-ctr,aes128-cbc,3des-ctr,3des-cbc,blowfish-cbc,aes192-cbc,aes256-cbc. If not specified the default list from JSCH will be used. String keyPair (security) Sets a key pair of the public and private key so to that the SFTP endpoint can do public/private key verification. KeyPair knownHosts (security) Sets the known_hosts from the byte array, so that the SFTP endpoint can do host key verification. byte[] knownHostsFile (security) Sets the known_hosts file, so that the SFTP endpoint can do host key verification. String knownHostsUri (security) Sets the known_hosts file (loaded from classpath by default), so that the SFTP endpoint can do host key verification. String password (security) Password to use for login String preferredAuthentications (security) Set the preferred authentications which SFTP endpoint will used. Some example include:password,publickey. If not specified the default list from JSCH will be used. String privateKey (security) Set the private key as byte so that the SFTP endpoint can do private key verification. byte[] privateKeyFile (security) Set the private key file so that the SFTP endpoint can do private key verification. String privateKeyPassphrase (security) Set the private key file passphrase so that the SFTP endpoint can do private key verification. String privateKeyUri (security) Set the private key file (loaded from classpath by default) so that the SFTP endpoint can do private key verification. 
String strictHostKeyChecking (security) Sets whether to use strict host key checking. no String username (security) Username to use for login String useUserKnownHostsFile (security) If knownHostFile has not been explicit configured then use the host file from System.getProperty(user.home)/.ssh/known_hosts true boolean
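To tie several of the options above together, the following is a minimal sketch of one consumer endpoint URI and one producer endpoint URI. The host name, directories, user, and key file paths are placeholders, and the option combination is just one plausible configuration, not a recommendation. If the password option is set in the URI and contains special characters, the RAW() syntax mentioned for the exclude and include options can be used for it as well.

sftp://appuser@sftp.example.com:22/inbox?privateKeyFile=/opt/keys/id_rsa&knownHostsFile=/opt/keys/known_hosts&readLock=changed&readLockCheckInterval=5000&move=.done&moveFailed=.error&delay=60s

sftp://appuser@sftp.example.com:22/outbox?privateKeyFile=/opt/keys/id_rsa&knownHostsFile=/opt/keys/known_hosts&fileExist=Append&tempPrefix=inprogress-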
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ftp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"sftp:host:port/directoryName"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sftp-component
|
E.3.3. /proc/bus/pci
|
E.3.3. /proc/bus/pci Later versions of the 2.6 Linux kernel have obsoleted the /proc/pci directory in favor of the /proc/bus/pci directory. Although you can get a list of all PCI devices present on the system using the command cat /proc/bus/pci/devices , the output is difficult to read and interpret. For a human-readable list of PCI devices, run the following command: The output is a sorted list of all IRQ numbers and addresses as seen by the cards on the PCI bus instead of as seen by the kernel. Beyond providing the name and version of the device, this list also gives detailed IRQ information so an administrator can quickly look for conflicts.
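To see the contrast for yourself, compare the first few rows of the raw interface with a filtered lspci query; the device class in the grep is only an example, and the output depends entirely on your hardware:

head -2 /proc/bus/pci/devices
/sbin/lspci -nn | grep -i ethernet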
|
[
"~]# /sbin/lspci -vb 00:00.0 Host bridge: Intel Corporation 82X38/X48 Express DRAM Controller Subsystem: Hewlett-Packard Company Device 1308 Flags: bus master, fast devsel, latency 0 Capabilities: [e0] Vendor Specific Information <?> Kernel driver in use: x38_edac Kernel modules: x38_edac 00:01.0 PCI bridge: Intel Corporation 82X38/X48 Express Host-Primary PCI Express Bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 00001000-00001fff Memory behind bridge: f0000000-f2ffffff Capabilities: [88] Subsystem: Hewlett-Packard Company Device 1308 Capabilities: [80] Power Management version 3 Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [a0] Express Root Port (Slot+), MSI 00 Capabilities: [100] Virtual Channel <?> Capabilities: [140] Root Complex Link <?> Kernel driver in use: pcieport Kernel modules: shpchp 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) (prog-if 00 [UHCI]) Subsystem: Hewlett-Packard Company Device 1308 Flags: bus master, medium devsel, latency 0, IRQ 5 I/O ports at 2100 Capabilities: [50] PCI Advanced Features Kernel driver in use: uhci_hcd [output truncated]"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-pci
|
Chapter 1. Dynamically provisioned OpenShift Data Foundation deployed on AWS
|
Chapter 1. Dynamically provisioned OpenShift Data Foundation deployed on AWS 1.1. Replacing operational or failed storage devices on AWS user-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an AWS user-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing an operational AWS node on user-provisioned infrastructure . Replacing a failed AWS node on user-provisioned infrastructure . 1.2. Replacing operational or failed storage devices on AWS installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on an AWS installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing an operational AWS node on installer-provisioned infrastructure . Replacing a failed AWS node on installer-provisioned infrastructure .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_devices/dynamically_provisioned_openshift_data_foundation_deployed_on_aws
|
Chapter 3. Important Changes to External Kernel Parameters
|
Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.5. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. Kernel parameters amd_iommu_intr = [HW,X86-64] Specifies one of the following AMD IOMMU interrupt remapping modes. legacy - Use legacy interrupt remapping mode. vapic - Use virtual APIC mode, which allows IOMMU to inject interrupts directly into the guest. This mode requires kvm-amd.avic=1 , which is the default when IOMMU HW support is present. debug_pagealloc = [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter enables the feature at boot time. It is disabled by default. To avoid allocating a huge chunk of memory for debug pagealloc, do not enable it at boot time, and the operating system will work similarly as with the kernel built without CONFIG_DEBUG_PAGEALLOC . Use debug_pagealloc = on to enable the feature. ftrace_graph_max_depth = uint [FTRACE] This parameter is used with the function graph tracer. It defines the maximum depth it will trace into a function. Its value can be changed at run time by the max_graph_depth file in the tracefs tracing directory. The default value is 0, which means that no limit is set. init_pkru = [x86] Specifies the default memory protection keys rights register contents for all processes. The default value is 0x55555554, which disallows access to all but pkey 0. You can override the value in the debugfs file system after boot. nopku = [x86] Disables the Memory Protection Keys CPU feature found in some Intel CPUs. mem_encrypt = [X86-64] Provides AMD Secure Memory Encryption (SME) control. The valid arguments are: on, off. The default setting depends on the kernel configuration option: on : CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y off : CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=n mem_encrypt=on: Activate SME mem_encrypt=off: Do not activate SME Kernel parameters to mitigate Spectre and Meltdown issues kpti = [X86-64] Enables kernel page table isolation. nopti = [X86-64] Disables kernel page table isolation. nospectre_v2 = [X86] Disables all mitigations for the Spectre variant 2 (indirect branch speculation) vulnerability. The operating system may allow data leaks with this option, which is equivalent to spectre_v2=off. spectre_v2 = [X86] Controls mitigation of the Spectre variant 2 (indirect branch speculation) vulnerability. The valid arguments are: on, off, auto. on: unconditionally enable off: unconditionally disable auto: kernel detects whether your CPU model is vulnerable Selecting on will, and auto may, choose a mitigation method at run time according to the CPU, the available microcode, the setting of the CONFIG_RETPOLINE configuration option, and the compiler with which the kernel was built. You can also select specific mitigations manually: retpoline: replaces indirect branches ibrs: Intel: Indirect Branch Restricted Speculation (kernel) ibrs_always: Intel: Indirect Branch Restricted Speculation (kernel and user space) Not specifying this option is equivalent to spectre_v2=auto. Updated /proc/sys/net/core entries dev_weight_rx_bias The RPS processing, for example RFS and aRFS , is competing with the registered NAPI poll function of the driver for the per softirq cycle netdev_budget .
This parameter influences the proportion of the configured netdev_budget that is spent on RPS based packet processing during RX softirq cycles. It also makes the current dev_weight adaptable for asymmetric CPU needs on the receiving or transmitting side of the network stack. This parameter is effective on a per-CPU basis. Determination is based on dev_weight , and it is calculated in a multiplicative way (dev_weight * dev_weight_rx_bias). The default value is 1. dev_weight_tx_bias This parameter scales the maximum number of packets that can be processed during a TX softirq cycle. It is effective on a per-CPU basis, and allows scaling of the current dev_weight for asymmetric net stack processing needs. Make sure to avoid making TX softirq processing a CPU hog. Determination is based on dev_weight , and it is calculated in a multiplicative way (dev_weight * dev_weight_tx_bias). The default value is 1.
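As an illustrative example of working with these parameters on Red Hat Enterprise Linux 7.5, and assuming a system that exposes the sysfs vulnerabilities files: grubby appends a boot parameter to every installed kernel, the sysfs entry reports which Spectre variant 2 mitigation is in effect after a reboot, and sysctl reads the new net.core entries. The values shown are examples, not tuning recommendations.

grubby --update-kernel=ALL --args="spectre_v2=auto"
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
sysctl net.core.dev_weight_rx_bias net.core.dev_weight_tx_bias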
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-kernel_parameters_changes
|
Chapter 8. Installing a cluster on GCP into a shared VPC
|
Chapter 8. Installing a cluster on GCP into a shared VPC In OpenShift Container Platform version 4.14, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation . The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have a GCP host project which contains a shared VPC network. You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation . You have a GCP service account that has the required GCP permissions in both the host and service projects. 8.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.5. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names. 8.5.1. Manually creating the installation configuration file You must manually create your installation configuration file when installing OpenShift Container Platform on GCP into a shared VPC using installer-provisioned infrastructure. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 8.5.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 8.5.3. 
Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 8.5.4. Sample customized install-config.yaml file for shared VPC installation There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml file which demonstrates these fields. Important This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 10 1 credentialsMode must be set to Passthrough or Manual . See the "Prerequisites" section for the required GCP permissions that your service account must have. 2 The name of the subnet in the shared VPC for compute machines to use. 3 The name of the subnet in the shared VPC for control plane machines to use. 4 The name of the shared VPC. 5 The name of the host project where the shared VPC exists. 6 The name of the GCP project where you want to install the cluster. 7 8 9 Optional. One or more network tags to apply to compute machines, control plane machines, or all machines. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. 8.5.5. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 8.7.1. 
Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 8.7.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. 
Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 8.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. 
You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. 
Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
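As an illustrative post-installation check (not part of the official procedure), you can reuse the exported kubeconfig to confirm that the cluster is healthy before you continue with customization. The commands below are standard oc commands and assume the default auth directory layout under your installation directory:
USD export KUBECONFIG=<installation_directory>/auth/kubeconfig
USD oc get nodes
USD oc get clusteroperators
All nodes should report a Ready status, and every cluster Operator should eventually report Available as True ; if an Operator stays Degraded , review its status conditions before proceeding.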
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/installing-gcp-shared-vpc
|
Chapter 4. Managing IDE extensions
|
Chapter 4. Managing IDE extensions IDEs use extensions or plugins to extend their functionality, and the mechanism for managing extensions differs between IDEs. Section 4.1, "Extensions for Microsoft Visual Studio Code - Open Source" 4.1. Extensions for Microsoft Visual Studio Code - Open Source To manage extensions, this IDE uses one of these Open VSX registry instances: The embedded instance of the Open VSX registry that runs in the plugin-registry pod of OpenShift Dev Spaces to support air-gapped, offline, and proxy-restricted environments. The embedded Open VSX registry contains only a subset of the extensions published on open-vsx.org . This subset is customizable . The public open-vsx.org registry that is accessed over the internet. A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods. The default is the embedded instance of the Open VSX registry. 4.1.1. Selecting an Open VSX registry instance The default is the embedded instance of the Open VSX registry. If the default Open VSX registry instance is not what you need, you can select one of the following instances: The Open VSX registry instance at https://open-vsx.org that requires access to the internet. A standalone Open VSX registry instance that is deployed on a network accessible from OpenShift Dev Spaces workspace pods. Procedure Edit the openVSXURL value in the CheCluster custom resource: spec: components: pluginRegistry: openVSXURL: " <url_of_an_open_vsx_registry_instance> " 1 1 For example: openVSXURL: "https://open-vsx.org" . Tip To select the embedded Open VSX registry instance in the plugin-registry pod, use openVSXURL: '' . You can customize the list of included extensions . You can also point openVSXURL at the URL of a standalone Open VSX registry instance if its URL is accessible from within your organization's cluster and not blocked by a proxy. 4.1.2. Adding or removing extensions in the embedded Open VSX registry instance You can add or remove extensions in the embedded Open VSX registry instance. This results in a custom build of the Open VSX registry that can be used in your organization's workspaces. Tip To get the latest security fixes after a OpenShift Dev Spaces update, rebuild your container based on the latest tag or SHA. Procedure Get the publisher and extension name of each chosen extension: Find the extension on the Open VSX registry website and copy the URL of the extension's listing page and extension's version. Extract the <publisher> and <extension> name from the copied URL: Tip If the extension is only available from Microsoft Visual Studio Marketplace , but not Open VSX , you can ask the extension publisher to also publish it on open-vsx.org according to these instructions , potentially using this GitHub action . If the extension publisher is unavailable or unwilling to publish the extension to open-vsx.org , and if there is no Open VSX equivalent of the extension, consider reporting an issue to the Open VSX team. Build the custom plugin registry image and update CheCluster custom resource: Tip During the build process, each extension will be verified for compatibility with the version of Visual Studio Code used in OpenShift Dev Spaces. Using OpenShift Dev Spaces instance: Login to your OpenShift Dev Spaces instance as an administrator. Create a new Red Hat Registry Service Account and copy username and token. Start a workspace using the plugin registry repository . 
Open a terminal and check out the Git tag that corresponds to your OpenShift Dev Spaces version (e.g., devspaces-3.15-rhel-8 ): Open the openvsx-sync.json file and add or remove extensions. Execute 1. Login to registry.redhat.io task in the workspace (Terminal Run Task... devfile 1. Login to registry.redhat.io) and log in to registry.redhat.io . Execute 2. Build and Publish a Custom Plugin Registry task in the workspace (Terminal Run Task... devfile 2. Build and Publish a Custom Plugin Registry). Execute 3. Configure Che to use the Custom Plugin Registry task in the workspace (Terminal Run Task... devfile 3. Configure Che to use the Custom Plugin Registry). Using Linux operating system: Tip Podman and NodeJS version 18.20.3 or higher should be installed on the system. Download or fork and clone the Dev Spaces repository . Go to the plugin registry submodule: Check out the tag that corresponds to your OpenShift Dev Spaces version (e.g., devspaces-3.15-rhel-8 ): Create a new Red Hat Registry Service Account and copy username and token. Log in to registry.redhat.io : For each extension that you need to add or remove, edit the openvsx-sync.json file : To add extensions, add the publisher, name and extension version to the openvsx-sync.json file. To remove extensions, remove the publisher, name and extension version from the openvsx-sync.json file. Use the following JSON syntax: { "id": " <publisher> . <name> ", "version": " <extension_version> " } Tip If you have a closed-source extension or an extension developed only for internal use in your organization, you can add the extension directly from a .vsix file by using a URL accessible to your custom plugin registry container: { "id": " <publisher> . <name> ", "download": " <url_to_download_vsix_file> ", "version": " <extension_version> " } Read the
|
[
"spec: components: pluginRegistry: openVSXURL: \" <url_of_an_open_vsx_registry_instance> \" 1",
"https://open-vsx.org/extension/ <publisher> / <name>",
"git checkout devspaces-USDPRODUCT_VERSION-rhel-8",
"git clone https://github.com/redhat-developer/devspaces.git",
"cd devspaces/dependencies/che-plugin-registry/",
"git checkout devspaces-USDPRODUCT_VERSION-rhel-8",
"login registry.redhat.io",
"{ \"id\": \" <publisher> . <name> \", \"version\": \" <extension_version> \" }",
"{ \"id\": \" <publisher> . <name> \", \"download\": \" <url_to_download_vsix_file> \", \"version\": \" <extension_version> \" }",
"./build.sh -o <username> -r quay.io -t custom",
"podman push quay.io/ <username/plugin_registry:custom>",
"spec: components: pluginRegistry: deployment: containers: - image: quay.io/ <username/plugin_registry:custom> openVSXURL: ''",
"\"trustedExtensionAuthAccess\": [ \"<publisher1>.<extension1>\", \"<publisher2>.<extension2>\" ]",
"env: - name: VSCODE_TRUSTED_EXTENSIONS value: \"<publisher1>.<extension1>,<publisher2>.<extension2>\"",
"kind: ConfigMap apiVersion: v1 metadata: name: trusted-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/administration_guide/managing-ide-extensions
|
Network APIs
|
Network APIs OpenShift Container Platform 4.13 Reference guide for network APIs Red Hat OpenShift Documentation Team
|
[
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]",
"Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]",
"{ Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }",
"a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ]"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/network_apis/index
|
Chapter 13. Configuring and setting up remote jobs
|
Chapter 13. Configuring and setting up remote jobs Red Hat Satellite supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously. 13.1. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts from Capsules by using shell scripts or Ansible roles and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax in job templates. For more information, see Appendix B, Template writing reference . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . 13.2. Remote execution workflow For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on your Capsule Server. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the remote execution feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the job load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job by using the Capsule to which the host is registered. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts through the Capsule to which they are registered. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 13.3. Permissions for remote execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite . The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 13.4. Transport modes for remote execution You can configure your Satellite to use two different modes of transport for remote job execution. You can configure single Capsule to use either one mode or the other but not both. Push-based transport On Capsules in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts. The remote execution Capsule must have access to the SSH port on the target hosts. Unless you have a different setting, the standard SSH port is 22. This transport mode supports both Script and Ansible providers. 
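As a quick illustration of the push-based prerequisites above, you can confirm on a target host that the SSH service is running and that the SSH port is open. This is a sketch that assumes a RHEL host with firewalld; it is not a required Satellite step, and you should adapt it to your own firewall tooling:
USD systemctl status sshd
USD firewall-cmd --list-services
USD firewall-cmd --permanent --add-service=ssh && firewall-cmd --reload
The second command shows whether the ssh service is already allowed; the third command opens it and reloads the firewall if it is not.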
Pull-based transport On Capsules in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to initiate the job execution it receives from Satellite Server. The host subscribes to the MQTT broker on Capsule for job notifications by using the yggdrasil pull client. After the host receives a notification from the MQTT broker, it pulls job details from Capsule over HTTPS, runs the job, and reports results back to Capsule. This transport mode supports the Script provider only. To use the pull-mqtt mode, you must enable it on Capsule Server and configure the pull client on hosts. Note If your Capsule already uses the pull-mqtt mode and you want to switch back to the ssh mode, run this satellite-installer command: Additional resources To enable pull mode on Capsule Server, see Configuring pull-based transport for remote execution in Installing Capsule Server . To enable pull mode on a registered host, continue with Section 13.5, "Configuring a host to use the pull client" . To enable pull mode on a new host, continue with the following: Section 2.1, "Creating a host in Red Hat Satellite" Section 4.3, "Registering hosts by using global registration" 13.5. Configuring a host to use the pull client For Capsules configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their Capsule Server. Prerequisites You have registered the host to Satellite. The Capsule through which the host is registered is configured to use pull-mqtt mode. For more information, see Configuring pull-based transport for remote execution in Installing Capsule Server . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . The host can communicate with its Capsule over MQTT using port 1883 . The host can communicate with its Capsule over HTTPS. Procedure Install the katello-pull-transport-migrate package on your host: On Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 hosts: On Red Hat Enterprise Linux 7 hosts: The package installs foreman_ygg_worker and yggdrasil as dependencies, configures the yggdrasil client, and starts the pull client worker on the host. Verification Check the status of the yggdrasild service: 13.6. Creating a job template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing hosts . Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. 
From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to Satellite after a job finishes. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing hosts . CLI procedure To create a job template using a template-definition file, enter the following command: 13.7. Importing an Ansible Playbook by name You can import Ansible Playbooks by name to Satellite from collections installed on Capsule. Satellite creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Fetch the available Ansible Playbooks by using the following API request: Select the Ansible Playbook you want to import and note its name. Import the Ansible Playbook by its name: You get a notification in the Satellite web UI after the import completes. steps You can run the playbook by executing a remote job from the created job template. For more information, see Section 13.22, "Executing a remote job" . 13.8. Importing all available Ansible Playbooks You can import all the available Ansible Playbooks to Satellite from collections installed on Capsule. Satellite creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Import the Ansible Playbooks by using the following API request: You get a notification in the Satellite web UI after the import completes. steps You can run the playbooks by executing a remote job from the created job templates. For more information, see Section 13.22, "Executing a remote job" . 13.9. Configuring the fallback to any Capsule remote execution setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. 
This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS, and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. To set the value to true , enter the following command: 13.10. Configuring the global Capsule remote execution setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. To set the value to true , enter the following command: 13.11. Setting an alternative directory for remote execution jobs in push mode By default, Satellite uses the /var/tmp directory on hosts for remote execution jobs in push mode. If the /var/tmp directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use satellite-installer to set an alternative directory for executing remote execution jobs in push mode. Procedure On your host, create a new directory: Copy the SELinux context from the default /var/tmp directory: Configure your Satellite Server or Capsule Server to use the new directory: 13.12. Setting an alternative directory for remote execution jobs in pull mode By default, Satellite uses the /run directory on hosts for remote execution jobs in pull mode. If the /run directory on your host is mounted with the noexec flag, Satellite cannot execute remote execution job scripts in this directory. You can use the yggdrasild service to set an alternative directory for executing remote execution jobs in pull mode. Procedure On your host, perform these steps: Create a new directory: Access the yggdrasild service configuration: Specify the alternative directory by adding the following line to the configuration: Restart the yggdrasild service: 13.13. Altering the privilege elevation method By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might need to use another method, such as su or dzdo .
You can globally configure an alternative method in your Satellite settings. Prerequisites Your user account has a role assigned that grants the view_settings and edit_settings permissions. If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation . Procedure Navigate to Administer > Settings . Select the Remote Execution tab. Click the value of the Effective User Method setting. Select the new value. Click Submit . 13.14. Distributing SSH keys for remote execution For Capsules in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from Capsule must be distributed to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 13.15, "Distributing SSH keys for remote execution manually" . Section 13.17, "Using the Satellite API to obtain SSH keys for remote execution" . Section 13.18, "Configuring a Kickstart template to distribute SSH keys during provisioning" . For new Satellite hosts, you can deploy SSH keys during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in Managing hosts . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts . 13.15. Distributing SSH keys for remote execution manually To distribute SSH keys manually, complete the following steps: Procedure Copy the public SSH key from your Capsule to your target host: Repeat this step for each target host you want to manage. Verification To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 13.16. Adding a passphrase to the SSH key used for remote execution By default, Capsule uses an SSH key that is not protected by a passphrase to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure. Procedure On your Satellite Server or Capsule Server, use ssh-keygen to add a passphrase to your SSH key: Next steps Users now must use a passphrase when running remote execution jobs on hosts. 13.17. Using the Satellite API to obtain SSH keys for remote execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 13.18. Configuring a Kickstart template to distribute SSH keys during provisioning You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 13.19.
Configuring a keytab for Kerberos ticket granting tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 13.20. Configuring Kerberos authentication for remote execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. Verification To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing hosts . 13.21. Setting up job templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job in Managing hosts . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing hosts . Ansible considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible Playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible Playbooks in Satellite. For more information, see Synchronizing Repository Templates in Managing hosts . Parameter variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. 13.22. Executing a remote job You can execute a job that is based on a job template against one or more hosts. Note Ansible jobs run in batches on multiple hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible Playbook runs on all hosts in the batch. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . 
Select the Job category and the Job template you want to use, then click Next . Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. Note If you want to select a host group and all of its subgroups, it is not sufficient to select the host group as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query: Replace My_Host_Group with the name of the top-level host group. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. After entering all the required inputs, click Next . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 13.23, "Advanced settings in the job wizard" . Click Next . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job at a future time, select Future execution . To execute the job on a regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click Next . Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click Next after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click Next after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. CLI procedure Enter the following command on Satellite: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern " . Additional resources For more information about creating, monitoring, or canceling remote jobs with Hammer CLI, enter hammer job-template --help and hammer job-invocation --help . 13.23. Advanced settings in the job wizard Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings. SSH user A user to be used for connecting to the host through SSH. Effective user A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts.
If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true . This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by Satellite. If your SSH user and effective user are identical, Satellite does not overwrite the become_user . Therefore, you can set a custom become_user in your Ansible Playbook. Description A description template for the job. Timeout to kill Time in seconds from the start of the job after which the job should be killed if it is not finished already. Time to pickup Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport. Password Is used if SSH authentication method is a password instead of the SSH key. Private key passphrase Is used if SSH keys are protected by a passphrase. Effective user password Is used if effective user is different from the ssh user. Concurrency level Defines the maximum number of jobs executed at once. This can prevent overload of system resources in a case of executing the job on a large number of hosts. Execution ordering Determines the order in which the job is executed on hosts. It can be alphabetical or randomized. 13.24. Using extended cron lines When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM. The extended cron line provides the following features: You can use # to specify a concrete week day in a month For example: 0 0 * * mon#1 specifies first Monday of the month 0 0 * * fri#3,fri#4 specifies 3rd and 4th Fridays of the month 0 7 * * fri#-1 specifies the last Friday of the month at 07:00 0 7 * * fri#L also specifies the last Friday of the month at 07:00 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00 You can use % to specify every n-th day of the month For example: 9 0 * * sun%2 specifies every other Sunday at 00:09 0 0 * * sun%2+1 specifies every odd Sunday 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday You can use & to specify that the day of the month has to match the day of the week For example: 0 0 30 * 1& specifies 30th day of the month, but only if it is Monday 13.25. Scheduling a recurring Ansible job for a host You can schedule a recurring job to run Ansible roles on hosts. Prerequisites Ensure you have the view_foreman_tasks , view_job_invocations , and view_recurring_logics permissions. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 13.26. Scheduling a recurring Ansible job for a host group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . 
Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 13.27. Using Ansible provider for package and errata actions By default, Satellite is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure Satellite to use them by default for remote execution features associated with them. Note Remember that Ansible job templates only work when remote execution is configured for ssh mode. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Find each feature whose name contains by_search . Change the job template for these features from Katello Script Default to Katello Ansible Default . Click Submit . Satellite now uses Ansible provider templates for the remote execution jobs that perform package and errata actions. This applies to job invocations started from the Satellite web UI as well as those started with hammer job-invocation create for the remote execution features that you have changed. 13.28. Setting the job rate limit on Capsule You can limit the maximum number of active jobs on a Capsule at a time to prevent performance spikes. The job is active from the time Capsule first tries to notify the host about the job until the job is finished on the host. The job rate limit applies only to MQTT-based jobs. Note The optimal maximum number of active jobs depends on the computing resources of your Capsule Server. By default, the maximum number of active jobs is unlimited. Procedure Set the maximum number of active jobs using satellite-installer : For example:
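A minimal sketch of that invocation, run with satellite-installer on the Capsule whose jobs you want to limit; the limit of 200 is only an illustrative value and matches the example in the command list below:
# Sketch: cap the number of active MQTT-based jobs on this Capsule at 200.
# 200 is an illustrative value; tune it to the computing resources of your Capsule Server.
satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200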
|
[
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh",
"dnf install katello-pull-transport-migrate",
"yum install katello-pull-transport-migrate",
"systemctl status yggdrasild",
"hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH",
"curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_Capsule_ID",
"curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID",
"curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir /My_Remote_Working_Directory",
"chcon --reference=/var/tmp /My_Remote_Working_Directory",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory",
"mkdir /My_Remote_Working_Directory",
"systemctl edit yggdrasild",
"Environment=FOREMAN_YGG_WORKER_WORKDIR= /My_Remote_Working_Directory",
"systemctl restart yggdrasild",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"",
"cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true",
"hostgroup_fullname ~ \" My_Host_Group *\"",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id My_Template_ID",
"hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200"
] |
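To tie the job template workflow above to the hammer job-template command listed in this block, the following sketch writes a minimal Ansible job template body to a file and registers it with hammer. The file name, job category, template name, and playbook content are illustrative assumptions, and the Ansible value for --provider-type is assumed to be accepted by your hammer version:
# Sketch only: create a minimal Ansible job template from a file.
# All names below are illustrative assumptions; adjust them to your environment.
cat <<'EOF' > my_ansible_template.erb
---
- hosts: all
  tasks:
    - name: Ensure chrony is installed
      ansible.builtin.package:
        name: chrony
        state: present
EOF
hammer job-template create \
  --file "my_ansible_template.erb" \
  --job-category "Ansible Commands" \
  --name "Install chrony (example)" \
  --provider-type Ansible
You can then run the template against hosts with hammer job-invocation create, as shown in the command list above.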
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/Configuring_and_Setting_Up_Remote_Jobs_managing-hosts
|
Managing compliance with Enterprise Contract
|
Managing compliance with Enterprise Contract Red Hat Trusted Application Pipeline 1.0 Learn how Enterprise Contract enables you to better verify and govern compliance of the code you promote. Additionally, customize the sample policies to fit your corporate standards. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/managing_compliance_with_enterprise_contract/index
|
Chapter 16. InsightsOperator [operator.openshift.io/v1]
|
Chapter 16. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 16.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Insights. status object status is the most recently observed status of the Insights operator. 16.1.1. .spec Description spec is the specification of the desired behavior of the Insights. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 16.1.2. .status Description status is the most recently observed status of the Insights operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. gatherStatus object gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. insightsReport object insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. 
observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 16.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 16.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 16.1.5. .status.gatherStatus Description gatherStatus provides basic information about the last Insights data gathering. When omitted, this means no data gathering has taken place yet. Type object Property Type Description gatherers array gatherers is a list of active gatherers (and their statuses) in the last gathering. gatherers[] object gathererStatus represents information about a particular data gatherer. lastGatherDuration string lastGatherDuration is the total time taken to process all gatherers during the last gather event. lastGatherTime string lastGatherTime is the last time when Insights data gathering finished. An empty value means that no data has been gathered yet. 16.1.6. .status.gatherStatus.gatherers Description gatherers is a list of active gatherers (and their statuses) in the last gathering. Type array 16.1.7. .status.gatherStatus.gatherers[] Description gathererStatus represents information about a particular data gatherer. Type object Required conditions lastGatherDuration name Property Type Description conditions array conditions provide details on the status of each gatherer. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastGatherDuration string lastGatherDuration represents the time spent gathering. name string name is the name of the gatherer. 16.1.8. .status.gatherStatus.gatherers[].conditions Description conditions provide details on the status of each gatherer. Type array 16.1.9. .status.gatherStatus.gatherers[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. 
If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 16.1.10. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 16.1.11. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 16.1.12. .status.insightsReport Description insightsReport provides general Insights analysis results. When omitted, this means no data gathering has taken place yet. Type object Property Type Description healthChecks array healthChecks provides basic information about active Insights health checks in a cluster. healthChecks[] object healthCheck represents an Insights health check attributes. 16.1.13. .status.insightsReport.healthChecks Description healthChecks provides basic information about active Insights health checks in a cluster. Type array 16.1.14. .status.insightsReport.healthChecks[] Description healthCheck represents an Insights health check attributes. Type object Required advisorURI description state totalRisk Property Type Description advisorURI string advisorURI provides the URL link to the Insights Advisor. description string description provides basic description of the healtcheck. state string state determines what the current state of the health check is. Health check is enabled by default and can be disabled by the user in the Insights advisor user interface. totalRisk integer totalRisk of the healthcheck. Indicator of the total risk posed by the detected issue; combination of impact and likelihood. The values can be from 1 to 4, and the higher the number, the more important the issue. 16.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/insightsoperators DELETE : delete collection of InsightsOperator GET : list objects of kind InsightsOperator POST : create an InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name} DELETE : delete an InsightsOperator GET : read the specified InsightsOperator PATCH : partially update the specified InsightsOperator PUT : replace the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/scale GET : read scale of the specified InsightsOperator PATCH : partially update scale of the specified InsightsOperator PUT : replace scale of the specified InsightsOperator /apis/operator.openshift.io/v1/insightsoperators/{name}/status GET : read status of the specified InsightsOperator PATCH : partially update status of the specified InsightsOperator PUT : replace status of the specified InsightsOperator 16.2.1. /apis/operator.openshift.io/v1/insightsoperators Table 16.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of InsightsOperator Table 16.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind InsightsOperator Table 16.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 16.5. HTTP responses HTTP code Reponse body 200 - OK InsightsOperatorList schema 401 - Unauthorized Empty HTTP method POST Description create an InsightsOperator Table 16.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.7. Body parameters Parameter Type Description body InsightsOperator schema Table 16.8. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 202 - Accepted InsightsOperator schema 401 - Unauthorized Empty 16.2.2. /apis/operator.openshift.io/v1/insightsoperators/{name} Table 16.9. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an InsightsOperator Table 16.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 16.12. Body parameters Parameter Type Description body DeleteOptions schema Table 16.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified InsightsOperator Table 16.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.15. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified InsightsOperator Table 16.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.17. Body parameters Parameter Type Description body Patch schema Table 16.18. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified InsightsOperator Table 16.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.20. Body parameters Parameter Type Description body InsightsOperator schema Table 16.21. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty 16.2.3. /apis/operator.openshift.io/v1/insightsoperators/{name}/scale Table 16.22. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified InsightsOperator Table 16.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.25. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified InsightsOperator Table 16.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.27. Body parameters Parameter Type Description body Patch schema Table 16.28. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified InsightsOperator Table 16.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.30. Body parameters Parameter Type Description body Scale schema Table 16.31. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 16.2.4. /apis/operator.openshift.io/v1/insightsoperators/{name}/status Table 16.32. Global path parameters Parameter Type Description name string name of the InsightsOperator Table 16.33. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified InsightsOperator Table 16.34. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 16.35. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified InsightsOperator Table 16.36. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.37. Body parameters Parameter Type Description body Patch schema Table 16.38. HTTP responses HTTP code Reponse body 200 - OK InsightsOperator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified InsightsOperator Table 16.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 16.40. Body parameters Parameter Type Description body InsightsOperator schema Table 16.41. HTTP responses HTTP code Response body 200 - OK InsightsOperator schema 201 - Created InsightsOperator schema 401 - Unauthorized Empty
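For quick inspection outside of a REST client, the subresource paths listed above can be read directly with the OpenShift CLI. The following is a minimal sketch; the singleton object name cluster and the logLevel field used in the patch are assumptions for illustration, not values taken from the tables above.

# Read the scale and status subresources of an InsightsOperator named "cluster"
oc get --raw /apis/operator.openshift.io/v1/insightsoperators/cluster/scale
oc get --raw /apis/operator.openshift.io/v1/insightsoperators/cluster/status
# Server-side dry-run PATCH against the main resource; no change is persisted
oc patch insightsoperator cluster --type=merge --dry-run=server -p '{"spec":{"logLevel":"Normal"}}'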
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/insightsoperator-operator-openshift-io-v1
|
Chapter 8. Managing compliance policies
|
Chapter 8. Managing compliance policies A compliance policy is a scheduled audit that checks the specified hosts for compliance against a specific XCCDF profile from SCAP content. You specify the schedule for scans on Satellite Server and the scans are performed on hosts. When a scan completes, a report in ARF format is generated and uploaded to Satellite Server. The compliance policy makes no changes to the scanned host. A compliance policy defines a SCAP client configuration and a cron schedule. The policy is then deployed together with the SCAP client on hosts to which the policy is assigned. 8.1. Creating a compliance policy By creating a compliance policy, you can define and plan your security compliance requirements, and ensure that your hosts remain compliant with your security policies. Prerequisites You have configured Satellite for your selected compliance policy deployment method . You have SCAP contents, and optionally tailoring files, available in Satellite. To verify which SCAP contents are available, see Chapter 6, Listing available SCAP contents . To upload SCAP contents and tailoring files, see Chapter 7, Configuring SCAP contents . Your user account has a role assigned that has the view_policies and create_policies permissions. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Policies . Click New Policy or New Compliance Policy . Select the deployment method: Ansible , Puppet , or Manual . Then click Next . Enter a name for this policy and, optionally, a description, then click Next . Select the SCAP Content and XCCDF Profile to be applied, then click Next . Note that Satellite does not detect whether the selected XCCDF profile contains any rules. An empty XCCDF profile, such as the Default XCCDF Profile , will return empty reports. Optional: To customize the XCCDF profile, select a Tailoring File and an XCCDF Profile in Tailoring File , then click Next . Specify the scheduled time when the policy is to be applied. Select Weekly , Monthly , or Custom from the Period list. The Custom option allows for greater flexibility in the policy's schedule. If you select Weekly , also select the desired day of the week from the Weekday list. If you select Monthly , also specify the desired day of the month in the Day of month field. If you select Custom , enter a valid Cron expression in the Cron line field. Select the locations to which to apply the policy, then click Next . Select the organizations to which to apply the policy, then click Next . Optional: Select the host groups to which to assign the policy. Click Submit . 8.2. Viewing a compliance policy You can preview the rules that will be applied by a specific combination of OpenSCAP content and profile. This is useful when you plan policies. Prerequisites Your user account has a role assigned that has the view_policies permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Policies . In the Actions column of the required policy, click Show Guide or select it from the list. 8.3. Editing a compliance policy In the Satellite web UI, you can edit compliance policies. The Puppet agent applies an edited policy to the host on its next run. By default, this occurs every 30 minutes. If you use Ansible, you must run the Ansible role again manually or configure a recurring remote execution job that runs the Ansible role on hosts. Prerequisites Your user account has a role assigned that has the view_policies and edit_policies permissions. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Policies .
Click the name of the required policy. Edit the necessary attributes. Click Submit . 8.4. Deleting a compliance policy In the Satellite web UI, you can delete existing compliance policies. Prerequisites Your user account has a role assigned that has the view_policies and destroy_policies permissions. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Policies . In the Actions column of the required policy, select Delete from the list. Click OK in the confirmation message.
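Compliance policies can also be created from the command line with the hammer tool instead of the web UI. The following is a minimal sketch only; the policy name, content and profile names, and the exact option names are assumptions — confirm them with hammer policy create --help on your Satellite Server.

# Hypothetical example: a policy deployed by Ansible with a custom cron schedule (03:00 every Saturday)
hammer policy create \
  --name "rhel_ospp_audit" \
  --deploy-by ansible \
  --scap-content "Red Hat rhel9 default content" \
  --scap-content-profile "Protection Profile for General Purpose Operating Systems" \
  --period custom \
  --cron-line "0 3 * * 6" \
  --organizations "Default Organization" \
  --locations "Default Location"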
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/Managing_Compliance_Policies_security-compliance
|
7.212. resource-agents
|
7.212. resource-agents 7.212.1. RHBA-2013:0288 - resource-agents bug fix and enhancement update Updated resource-agents packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability (HA) environment for both the Pacemaker and rgmanager service managers. Bug Fixes BZ# 714156 Previously, the status action in the netfs interface failed to write any output to the /var/log/cluster/rgmanager.log file. Consequently, it was not possible to verify if the status check of an NFS mount was successful. The bug has been fixed, and results of the status check are now properly stored in the log file. BZ# 728365 For HA-LVM to work properly, the /boot/initrd.img file, which is used during the boot process, must be synchronized with the /etc/lvm/lvm.conf file. Previously, the HA-LVM startup failed when lvm.conf was changed without updating initrd.img . With this update, this behavior has been modified. A warning message is now displayed, but the startup is no longer terminated in the described case. BZ#729812 Prior to this update, occasional service failures occurred when starting the clvmd variant of the HA-LVM service on multiple nodes in a cluster at the same time. The start of an HA-LVM resource coincided with another node initializing that same HA-LVM resource. With this update, a patch has been introduced to synchronize the initialization of both resources. As a result, services no longer fail due to the simultaneous initialization. BZ# 817550 When the oracledb.sh script was called with the status argument, it restarted the database after checking its status without any notification to the rgmanager application. This bug has been fixed, and the unwanted restart no longer occurs. BZ# 822244 Previously, the /usr/sbin/tomcat-6.sh script parsed configuration files and set shell variables before starting the Apache Tomcat 6 servlet container. Consequently, the default configuration was ignored. This bug has been fixed and the aforementioned problem no longer occurs. BZ#839181 Previously, an output of HA-LVM commands that contained more than one word, was not correctly parsed. Consequently, starting an HA-LVM service with the rg_test command occasionally failed with the following message: With this update, the underlying source code has been modified to add quotation marks around variables that expand to more than one word. As a result, the aforementioned startup errors no longer occur. BZ#847335 If the contents of the /proc/mounts file changed during a status check operation of the file system resource agent, the status check could incorrectly detect a missing mount and mark the service as failed. This bug has been fixed and rgmanager 's file system resource agent no longer reports false failures in the described scenario. BZ#848642 Previously, rgmanager did not recognize CIFS (Common Internet File System) mounts in case their corresponding entries in the device field of the /proc/mounts file contained trailing slashes. With this update, a patch has been introduced to remove trailing slashes from device names when reading the contents of /proc/mounts . As a result, CIFS mounts are now recognized properly. BZ#853249 Prior to this update, when running a file system depending on an LVM resource in a service, and that LVM resource failed to start, the subsequent attempt to unregister the file system resource failed. 
This bug has been fixed, and a file system resource can now be successfully unregistered after a failed mount operation. BZ#860328 Previously, when using the HA-LVM resource agent in the Pacemaker cluster environment, several errors and failed actions occurred. With this update, several scripts have been added to prevent these errors. These scripts repair the treatment of whitespace within HA-LVM and the processing of non-zero codes in rgmanager . In addition, the member_util utility has been updated to use Corosync and Pacemaker when rgmanager is not present on the system. BZ#860981 Previously, when a node lost access to the storage device, HA-LVM was unable to deactivate the volume group for the services running in that node. The underlying source code has been modified to allow services to migrate to other machines that still have access to storage devices, thus preventing this bug. BZ# 869695 Previously, SAP instances started by the SAPInstance cluster resource agent inherited limits on system resources for the root user. Higher limits were needed on the maximum number of open files ( ulimit -n ), the maximum stack size ( ulimit -s ), and the maximum size of data segments ( ulimit -d ). With this update, the SAPInstance agent has been modified to accept limits specified in the /usr/sap/services file. As a result, system resources limits can now be specified manually. Enhancements BZ#773478 With this update, the /usr/share/cluster/script.sh resource, used mainly by the rgmanager application, has been enhanced to provide more informative reports on causes of internal errors. BZ#822053 With this update, the nfsrestart option has been added to both the fs and clusterfs resource agents. This option provides a way to forcefully restart NFS servers and allow a clean unmount of an exported file system. BZ#834293 The pacemaker SAPInstance and SAPDatabase resource agents have been updated with the latest upstream patches. BZ# 843049 A new prefer_interface parameter has been added to the rgmanager ip.sh resource agent. This parameter is used for adding an IP address to a particular network interface when a cluster node has multiple active interfaces with IP addresses on the same subnetwork. All users of resource-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements 7.212.2. RHEA-2013:1494 - resource-agents enhancement update Updated resource-agents packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability (HA) environment for both the Pacemaker and rgmanager service managers. Enhancement BZ# 1001519 This update adds support for the Pacemaker resource agents under the Heartbeat OCF provider. Users of resource-agents are advised to upgrade to these updated packages, which add this enhancement. 7.212.3. RHBA-2013:1007 - resource-agents bug fix and enhancement update Updated resource-agents packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The resource-agents packages contain a set of scripts to interface with several services to operate in a High Availability (HA) environment for both the Pacemaker and rgmanager service managers. Bug Fix BZ# 978775 Usage of lvm.sh with tags resulted in the stripping of cluster tags when the node rejoined the cluster. This was because the lvm.sh agent was unable to accurately detect the tag represented by a cluster node. 
Thus, the active logical volume on a cluster node failed when another node re-joined the cluster. This update properly detects whether a tag represents a cluster node, regardless of whether the node name or the FQDN is returned by the corosync-quorumtool -l output. When nodes re-join the cluster, tags are no longer stripped from LVM volume groups, and the volume group no longer fails on other nodes. Enhancement BZ# 972931 Previous versions of the Oracle Resource Agent were tested only against Oracle 10. With this update, support for Oracle Database 11g has been added to the oracledb, orainstance, and oralistener resource agents. Users of resource-agents are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
|
[
"too many arguments"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/resource-agents
|
Chapter 20. Cache Writing Modes
|
Chapter 20. Cache Writing Modes Red Hat JBoss Data Grid presents configuration options with a single or multiple cache stores. This allows it to store data in a persistent location, for example a shared JDBC database or a local file system. JBoss Data Grid supports two caching modes: Write-Through (Synchronous) Write-Behind (Asynchronous) 20.1. Write-Through Caching The Write-Through (or Synchronous) mode in Red Hat JBoss Data Grid ensures that when clients update a cache entry (usually via a Cache.put() invocation), the call does not return until JBoss Data Grid has located and updated the underlying cache store. This feature allows updates to the cache store to be concluded within the client thread boundaries. 20.1.1. Write-Through Caching Benefits and Disadvantages Write-Through Caching Benefits The primary advantage of the Write-Through mode is that the cache and cache store are updated simultaneously, which ensures that the cache store remains consistent with the cache contents. Write-Through Caching Disadvantages Due to the cache store being updated simultaneously with the cache entry, there is a possibility of reduced performance for cache operations that occur concurrently with the cache store accesses and updates. 20.1.2. Write-Through Caching Configuration (Library Mode) No specific configuration operations are required to configure a Write-Through or synchronous cache store. All cache stores are Write-Through or synchronous unless explicitly marked as Write-Behind or asynchronous. The following procedure demonstrates a sample configuration file of a Write-Through unshared local file cache store. Procedure 20.1. Configure a Write-Through Local File Cache Store The name parameter specifies the name of the namedCache to use. The fetchPersistentState parameter determines whether the persistent state is fetched when joining a cluster. Set this to true if using a replication and invalidation in a clustered environment. Additionally, if multiple cache stores are chained, only one cache store can have this property enabled. If a shared cache store is used, the cache does not allow a persistent state transfer despite this property being set to true . The fetchPersistentState parameter is false by default. The ignoreModifications parameter determines whether operations that modify the cache (e.g. put, remove, clear, store, etc.) do not affect the cache store. As a result, the cache store can become out of sync with the cache. The purgeOnStartup parameter specifies whether the cache is purged when initially started. The shared parameter is used when multiple cache instances share a cache store and is now defined at the cache store level. This parameter can be set to prevent multiple cache instances writing the same modification multiple times. Valid values for this parameter are true and false .
|
[
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"urn:infinispan:config:6.2\"> <global /> <default /> <namedCache name=\"persistentCache\"> <persistence> <singleFile fetchPersistentState=\"true\" ignoreModifications=\"false\" purgeOnStartup=\"false\" shared=\"false\" location=\"USD{java.io.tmpdir}\"/> </persistence> </namedCache> </infinispan>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-cache_writing_modes
|
3.15. Searching for Virtual Machines
|
3.15. Searching for Virtual Machines The following table describes all search options for virtual machines. Note Currently, the Network Label , Custom Emulated Machine , and Custom CPU Type properties are not supported search parameters. Table 3.11. Searching for Virtual Machines Property (of resource or resource-type) Type Description (Reference) Hosts. hosts-prop Depends on property type The property of the hosts associated with the virtual machine. Templates. templates-prop Depends on property type The property of the templates associated with the virtual machine. Events. events-prop Depends on property type The property of the events associated with the virtual machine. Users. users-prop Depends on property type The property of the users associated with the virtual machine. Storage. storage-prop Depends on the property type The property of storage devices associated with the virtual machine. Vnic. vnic-prop Depends on the property type The property of the VNIC associated with the virtual machine. name String The name of the virtual machine. status List The availability of the virtual machine. ip Integer The IP address of the virtual machine. uptime Integer The number of minutes that the virtual machine has been running. domain String The domain (usually Active Directory domain) that groups these machines. os String The operating system selected when the virtual machine was created. creationdate Date The date on which the virtual machine was created. address String The unique name that identifies the virtual machine on the network. cpu_usage Integer The percent of processing power used. mem_usage Integer The percentage of memory used. network_usage Integer The percentage of network used. memory Integer The maximum memory defined. apps String The applications currently installed on the virtual machine. cluster List The cluster to which the virtual machine belongs. pool List The virtual machine pool to which the virtual machine belongs. loggedinuser String The name of the user currently logged in to the virtual machine. tag List The tags to which the virtual machine belongs. datacenter String The data center to which the virtual machine belongs. type List The virtual machine type (server or desktop). quota String The name of the quota associated with the virtual machine. description String Keywords or text describing the virtual machine, optionally used when creating the virtual machine. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. next_run_configuration_exists Boolean The virtual machine has pending configuration changes. Example Vms: template.name = Win* and user.name = "" This example returns a list of virtual machines whose base template name begins with Win and are assigned to any user. Example Vms: cluster = Default and os = windows7 This example returns a list of virtual machines that belong to the Default cluster and are running Windows 7.
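The same search syntax is accepted by the REST API through the search query parameter, which is convenient for scripting the queries shown in the examples above. The following is a minimal sketch; the engine FQDN, credentials, and CA certificate path are placeholders, not values from this guide.

# URL-encoded form of the query: cluster = Default and os = windows7
curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
  -u 'admin@internal:password' \
  -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms?search=cluster%3DDefault%20and%20os%3Dwindows7'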
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_virtual_machines
|
Installing with RHEL image mode
|
Installing with RHEL image mode Red Hat build of MicroShift 4.18 Embedding MicroShift in a bootc image Red Hat OpenShift Documentation Team
|
[
"FROM registry.redhat.io/rhel9/rhel-bootc:9.4 ARG USHIFT_VER=4.17 RUN dnf config-manager --set-enabled rhocp-USD{USHIFT_VER}-for-rhel-9-USD(uname -m)-rpms --set-enabled fast-datapath-for-rhel-9-USD(uname -m)-rpms RUN dnf install -y firewalld microshift && systemctl enable microshift && dnf clean all Create a default 'redhat' user with the specified password. Add it to the 'wheel' group to allow for running sudo commands. ARG USER_PASSWD RUN if [ -z \"USD{USER_PASSWD}\" ] ; then echo USER_PASSWD is a mandatory build argument && exit 1 ; fi RUN useradd -m -d /var/home/redhat -G wheel redhat && echo \"redhat:USD{USER_PASSWD}\" | chpasswd Mandatory firewall configuration RUN firewall-offline-cmd --zone=public --add-port=22/tcp && firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 && firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 Create a systemd unit to recursively make the root filesystem subtree shared as required by OVN images RUN cat > /etc/systemd/system/microshift-make-rshared.service <<'EOF' [Unit] Description=Make root filesystem shared Before=microshift.service ConditionVirtualization=container [Service] Type=oneshot ExecStart=/usr/bin/mount --make-rshared / [Install] WantedBy=multi-user.target EOF RUN systemctl enable microshift-make-rshared.service",
"PULL_SECRET=~/.pull-secret.json USER_PASSWD=<your_redhat_user_password> 1 IMAGE_NAME=microshift-4.17-bootc sudo podman build --authfile \"USD{PULL_SECRET}\" -t \"USD{IMAGE_NAME}\" --build-arg USER_PASSWD=\"USD{USER_PASSWD}\" -f Containerfile",
"sudo podman images \"USD{IMAGE_NAME}\"",
"REPOSITORY TAG IMAGE ID CREATED SIZE localhost/microshift-4.17-bootc latest 193425283c00 2 minutes ago 2.31 GB",
"REGISTRY_URL=quay.io sudo podman login \"USD{REGISTRY_URL}\" 1",
"REGISTRY_IMG=<myorg/mypath>/\"USD{IMAGE_NAME}\" 1 2 IMAGE_NAME=<microshift-4.17-bootc> 3 sudo podman push localhost/\"USD{IMAGE_NAME}\" \"USD{REGISTRY_URL}/USD{REGISTRY_IMG}\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/installing_with_rhel_image_mode/index
|
Chapter 14. Backup and restore
|
Chapter 14. Backup and restore 14.1. Backing up and restoring virtual machines You back up and restore virtual machines by using the OpenShift API for Data Protection (OADP) . Important OADP for OpenShift Virtualization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application with the kubevirt and openshift plugins . Back up virtual machines by creating a Backup custom resource (CR) . Restore the Backup CR by creating a Restore CR . 14.1.1. Additional resources OADP features and plugins Troubleshooting
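The Backup and Restore custom resources referenced in the procedure are ordinary Kubernetes objects, so they can be created with oc once the Data Protection Application is running. The following is a minimal sketch, assuming the OADP Operator is installed in the openshift-adp namespace and the virtual machines run in a namespace called vm-guests; both names are assumptions, not values from this chapter.

# Back up all objects in the namespace that contains the virtual machines
oc apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vm-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - vm-guests
EOF

# Restore from that backup later
oc apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: vm-restore
  namespace: openshift-adp
spec:
  backupName: vm-backup
EOF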
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/backup-and-restore
|
Chapter 12. Preparing and uploading cloud images by using RHEL image builder
|
Chapter 12. Preparing and uploading cloud images by using RHEL image builder RHEL image builder can create custom system images ready for use on various cloud platforms. To use your customized RHEL system image in a cloud, create the system image with RHEL image builder by using the chosen output type, configure your system for uploading the image, and upload the image to your cloud account. You can push customized image clouds through the Image Builder application in the RHEL web console, available for a subset of the service providers that we support, such as AWS and Microsoft Azure clouds. See Creating and automatically uploading images directly to AWS Cloud AMI and Creating and automatically uploading VHD images directly to Microsoft Azure cloud . 12.1. Preparing and uploading AMI images to AWS You can create custom images and can update them, either manually or automatically, to the AWS cloud with RHEL image builder. 12.1.1. Preparing to manually upload AWS AMI images Before uploading an AWS AMI image, you must configure a system for uploading the images. Prerequisites You must have an Access Key ID configured in the AWS IAM account manager . You must have a writable S3 bucket prepared. See Creating S3 bucket . Procedure Install Python 3 and the pip tool: Install the AWS command-line tools with pip : Set your profile. The terminal prompts you to provide your credentials, region and output format: Define a name for your bucket and create a bucket: Replace bucketname with the actual bucket name. It must be a globally unique name. As a result, your bucket is created. To grant permission to access the S3 bucket, create a vmimport S3 Role in the AWS Identity and Access Management (IAM), if you have not already done so in the past: Create a trust-policy.json file with the trust policy configuration, in the JSON format. For example: Create a role-policy.json file with the role policy configuration, in the JSON format. For example: Create a role for your Amazon Web Services account, by using the trust-policy.json file: Embed an inline policy document, by using the role-policy.json file: Additional resources Using high-level (s3) commands with the AWS CLI 12.1.2. Manually uploading an AMI image to AWS by using the CLI You can use RHEL image builder to build ami images and manually upload them directly to Amazon AWS Cloud service provider, by using the CLI. Prerequisites You have an Access Key ID configured in the AWS IAM account manager. You must have a writable S3 bucket prepared. See Creating S3 bucket . You have a defined blueprint. Procedure Using the text editor, create a configuration file with the following content: Replace values in the fields with your credentials for accessKeyID , secretAccessKey , bucket , and region . The IMAGE_KEY value is the name of your VM Image to be uploaded to EC2. Save the file as CONFIGURATION-FILE .toml and close the text editor. Start the compose to upload it to AWS: Replace: blueprint-name with the name of the blueprint you created image-type with the ami image type. image-key with the name of your VM Image to be uploaded to EC2. configuration-file .toml with the name of the configuration file of the cloud provider. Note You must have the correct AWS Identity and Access Management (IAM) settings for the bucket you are going to send your customized image to. You have to set up a policy to your bucket before you are able to upload images to it. 
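For orientation, the compose-and-upload step described above typically reduces to a single composer-cli invocation of the following form. The blueprint name, image key, and configuration file name below are placeholders, not values defined in this chapter.

# composer-cli compose start BLUEPRINT IMAGE-TYPE IMAGE-KEY UPLOAD-CONFIG
composer-cli compose start base-aws ami my-rhel-ec2-image aws-config.toml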
Check the status of the image build: After the image upload process is complete, you can see the "FINISHED" status. Verification To confirm that the image upload was successful: Access EC2 on the menu and select the correct region in the AWS console. The image must have the available status, to indicate that it was successfully uploaded. On the dashboard, select your image and click Launch . Additional Resources Required service role to import a VM 12.1.3. Creating and automatically uploading images to the AWS Cloud AMI You can create a (.raw) image by using RHEL image builder, and choose to check the Upload to AWS checkbox to automatically push the output image that you create directly to the Amazon AWS Cloud AMI service provider. Prerequisites You must have root or wheel group user access to the system. You have opened the RHEL image builder interface of the RHEL web console in a browser. You have created a blueprint. See Creating a blueprint in the web console interface . You must have an Access Key ID configured in the AWS IAM account manager. You must have a writable S3 bucket prepared. Procedure In the RHEL image builder dashboard, click the blueprint name that you previously created. Select the tab Images . Click Create Image to create your customized image. The Create Image window opens. From the Type drop-down menu list, select Amazon Machine Image Disk (.raw) . Check the Upload to AWS checkbox to upload your image to the AWS Cloud and click . To authenticate your access to AWS, type your AWS access key ID and AWS secret access key in the corresponding fields. Click . Note You can view your AWS secret access key only when you create a new Access Key ID. If you do not know your Secret Key, generate a new Access Key ID. Type the name of the image in the Image name field, type the Amazon bucket name in the Amazon S3 bucket name field and type the AWS region field for the bucket you are going to add your customized image to. Click . Review the information and click Finish . Optionally, click Back to modify any incorrect detail. Note You must have the correct IAM settings for the bucket you are going to send your customized image. This procedure uses the IAM Import and Export, so you have to set up a policy to your bucket before you are able to upload images to it. For more information, see Required Permissions for IAM Users . A pop-up on the upper right informs you of the saving progress. It also informs that the image creation has been initiated, the progress of this image creation and the subsequent upload to the AWS Cloud. After the process is complete, you can see the Image build complete status. In a browser, access Service->EC2 . On the AWS console dashboard menu, choose the correct region . The image must have the Available status, to indicate that it is uploaded. On the AWS dashboard, select your image and click Launch . A new window opens. Choose an instance type according to the resources you need to start your image. Click Review and Launch . Review your instance start details. You can edit each section if you need to make any changes. Click Launch Before you start the instance, select a public key to access it. You can either use the key pair you already have or you can create a new key pair. Follow the steps to create a new key pair in EC2 and attach it to the new instance. From the drop-down menu list, select Create a new key pair . Enter the name to the new key pair. It generates a new key pair. Click Download Key Pair to save the new key pair on your local system. 
Then, you can click Launch Instance to start your instance. You can check the status of the instance, which displays as Initializing . After the instance status is running , the Connect button becomes available. Click Connect . A window appears with instructions on how to connect by using SSH. Select A standalone SSH client as the preferred connection method to and open a terminal. In the location you store your private key, ensure that your key is publicly viewable for SSH to work. To do so, run the command: Connect to your instance by using its Public DNS: Type yes to confirm that you want to continue connecting. As a result, you are connected to your instance over SSH. Verification Check if you are able to perform any action while connected to your instance by using SSH. Additional resources Open a case on Red Hat Customer Portal Connecting to your Linux instance by using SSH 12.2. Preparing and uploading VHD images to Microsoft Azure You can create custom images and can update them, either manually or automatically, to the Microsoft Azure cloud with RHEL image builder. 12.2.1. Preparing to manually upload Microsoft Azure VHD images To create a VHD image that you can manually upload to Microsoft Azure cloud, you can use RHEL image builder. Prerequisites You must have a Microsoft Azure resource group and storage account. You have Python installed. The AZ CLI tool depends on python. Procedure Import the Microsoft repository key: Create a local azure-cli.repo repository with the following information. Save the azure-cli.repo repository under /etc/yum.repos.d/ : Install the Microsoft Azure CLI: Note The downloaded version of the Microsoft Azure CLI package can vary depending on the current available version. Run the Microsoft Azure CLI: The terminal shows the following message Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code . Then, the terminal opens a browser with a link to https://microsoft.com/devicelogin from where you can login. Note If you are running a remote (SSH) session, the login page link will not open in the browser. In this case, you can copy the link to a browser and login to authenticate your remote session. To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the device code to authenticate. List the keys for the storage account in Microsoft Azure: Replace resource-group-name with name of your Microsoft Azure resource group and storage-account-name with name of your Microsoft Azure storage account. Note You can list the available resources using the following command: Make note of value key1 in the output of the command. Create a storage container: Replace storage-account-name with name of the storage account. Additional resources Microsoft Azure CLI. 12.2.2. Manually uploading VHD images to Microsoft Azure cloud After you have created your customized VHD image, you can manually upload it to the Microsoft Azure cloud. Prerequisites Your system must be set up for uploading Microsoft Azure VHD images. See Preparing to upload Microsoft Azure VHD images . You must have a Microsoft Azure VHD image created by RHEL image builder. In the GUI, use the Azure Disk Image (.vhd) image type. In the CLI, use the vhd output type. 
Procedure Push the image to Microsoft Azure and create an instance from it: After the upload to the Microsoft Azure Blob storage completes, create a Microsoft Azure image from it: Note Because the images that you create with RHEL image builder generate hybrid images that support to both the V1 = BIOS and V2 = UEFI instances types, you can specify the --hyper-v-generation argument. The default instance type is V1. Verification Create an instance either with the Microsoft Azure portal, or a command similar to the following: Use your private key via SSH to access the resulting instance. Log in as azure-user . This username was set on the step. Additional Resources Composing an image for the .vhd format fails (Red Hat Knowledgebase) 12.2.3. Creating and automatically uploading VHD images to Microsoft Azure cloud You can create .vhd images by using RHEL image builder that will be automatically uploaded to a Blob Storage of the Microsoft Azure Cloud service provider. Prerequisites You have root access to the system. You have access to the RHEL image builder interface of the RHEL web console. You created a blueprint. See Creating a RHEL image builder blueprint in the web console interface . You have a Microsoft Storage Account created. You have a writable Blob Storage prepared. Procedure In the RHEL image builder dashboard, select the blueprint you want to use. Click the Images tab. Click Create Image to create your customized .vhd image. The Create image wizard opens. Select Microsoft Azure (.vhd) from the Type drop-down menu list. Check the Upload to Azure checkbox to upload your image to the Microsoft Azure Cloud. Enter the Image Size and click . On the Upload to Azure page, enter the following information: On the Authentication page, enter: Your Storage account name. You can find it on the Storage account page, in the Microsoft Azure portal . Your Storage access key : You can find it on the Access Key Storage page. Click . On the Authentication page, enter: The image name. The Storage container . It is the blob container to which you will upload the image. Find it under the Blob service section, in the Microsoft Azure portal . Click . On the Review page, click Create . The RHEL image builder and upload processes start. Access the image you pushed into Microsoft Azure Cloud . Access the Microsoft Azure portal . In the search bar, type "storage account" and click Storage accounts from the list. On the search bar, type "Images" and select the first entry under Services . You are redirected to the Image dashboard . On the navigation panel, click Containers . Find the container you created. Inside the container is the .vhd file you created and pushed by using RHEL image builder. Verification Verify that you can create a VM image and launch it. In the search bar, type images account and click Images from the list. Click +Create . From the dropdown list, choose the resource group you used earlier. Enter a name for the image. For the OS type , select Linux . For the VM generation , select Gen 2 . Under Storage Blob , click Browse and click through the storage accounts and container until you reach your VHD file. Click Select at the end of the page. Choose an Account Type, for example, Standard SSD . Click Review + Create and then Create . Wait a few moments for the image creation. To launch the VM, follow the steps: Click Go to resource . Click + Create VM from the menu bar on the header. Enter a name for your virtual machine. Complete the Size and Administrator account sections. 
Click Review + Create and then Create . You can see the deployment progress. After the deployment finishes, click the virtual machine name to retrieve the public IP address of the instance to connect by using SSH. Open a terminal to create an SSH connection to connect to the VM. Additional resources Microsoft Azure Storage Documentation Create a Microsoft Azure Storage account Open a case on Red Hat Customer Portal Help + support Contacting Red Hat 12.2.4. Uploading VMDK images and creating a RHEL virtual machine in vSphere With RHEL image builder, you can create customized VMware vSphere system images, either in the Open virtualization format ( .ova ) or in the Virtual disk ( .vmdk ) format. You can upload these images to the VMware vSphere client. You can upload the .vmdk or .ova image to VMware vSphere using the govc import.vmdk CLI tool. The vmdk you create contains the cloud-init package installed and you can use it to provision users by using user data, for example. Note Uploading vmdk images by using the VMware vSphere GUI is not supported. Prerequisites You created a blueprint with username and password customizations. You created a VMware vSphere image either in the .ova or .vmdk format by using RHEL image builder and downloaded it to your host system. You installed and configured the govc CLI tool, to be able use the import.vmdk command. Procedure Configure the following values in the user environment with the GOVC environment variables: Navigate to the directory where you downloaded your VMware vSphere image. Launch the VMware vSphere image on vSphere by following the steps: Import the VMware vSphere image in to vSphere: For the .ova format: Create the VM in vSphere without powering it on: For the .ova format, replace the line -firmware=efi -disk=" foldername /composer-api.vmdk" \ with `-firmware=efi -disk=" foldername /composer-api.ova" \ Power-on the VM: Retrieve the VM IP address: Use SSH to log in to the VM, using the username and password you specified in your blueprint: Note If you copied the .vmdk image from your local host to the destination using the govc datastore.upload command, using the resulting image is not supported. There is no option to use the import.vmdk command in the vSphere GUI and as a result, the vSphere GUI does not support the direct upload. As a consequence, the .vmdk image is not usable from the vSphere GUI. 12.2.5. Creating and automatically uploading VMDK images to vSphere using image builder GUI You can build VMware images by using the RHEL image builder GUI tool and automatically push the images directly to your vSphere instance. This avoids the need to download the image file and push it manually. The vmdk you create contains the cloud-init package installed and you can use it to provision users by using user data, for example. To build .vmdk images by using RHEL image builder and push them directly to vSphere instances service provider, follow the steps: Prerequisites You are a member of the root or the weldr group. You have opened link:https://localhost:9090/RHEL image builder in a browser. You have created a blueprint. See Creating a RHEL image builder blueprint in the web console interface . You have a vSphere Account . Procedure For the blueprint you created, click the Images tab . Click Create Image to create your customized image. The Image type window opens. In the Image type window: From the dropdown menu, select the Type: VMware vSphere (.vmdk). Check the Upload to VMware checkbox to upload your image to the vSphere. 
Optional: Set the size of the image you want to instantiate. The minimal default size is 2 GB. Click . In the Upload to VMware window, under Authentication , enter the following details: Username : username of the vSphere account. Password : password of the vSphere account. In the Upload to VMware window, under Destination , enter the following details about the image upload destination: Image name : a name for the image. Host : The URL of your VMware vSphere. Cluster : The name of the cluster. Data center : The name of the data center. Data store :The name of the Data store. Click . In the Review window, review the details of the image creation and click Finish . You can click Back to modify any incorrect detail. RHEL image builder adds the compose of a RHEL vSphere image to the queue, and creates and uploads the image to the Cluster on the vSphere instance you specified. Note The image build and upload processes take a few minutes to complete. After the process is complete, you can see the Image build complete status. Verification After the image status upload is completed successfully, you can create a Virtual Machine (VM) from the image you uploaded and login into it. To do so: Access VMware vSphere Client. Search for the image in the Cluster on the vSphere instance you specified. Select the image you uploaded. Right-click the selected image. Click New Virtual Machine . A New Virtual Machine window opens. In the New Virtual Machine window, provide the following details: Select New Virtual Machine . Select a name and a folder for your VM. Select a computer resource: choose a destination computer resource for this operation. Select storage: For example, select NFS-Node1 Select compatibility: The image should be BIOS only. Select a guest operating system: For example, select Linux and Red Hat Fedora (64-bit) . Customize hardware : When creating a VM, on the Device Configuration button on the upper right, delete the default New Hard Disk and use the drop-down to select an Existing Hard Disk disk image: Ready to complete: Review the details and click Finish to create the image. Navigate to the VMs tab. From the list, select the VM you created. Click the Start button from the panel. A new window appears, showing the VM image loading. Log in with the credentials you created for the blueprint. You can verify if the packages you added to the blueprint are installed. For example: Additional resources Introduction to vSphere Installation and Setup 12.3. Preparing and uploading custom GCE images to GCP You can create custom images and then automatically update them to the Oracle Cloud Infrastructure (OCI) instance with RHEL image builder. 12.3.1. Uploading images to GCP with RHEL image builder With RHEL image builder, you can build a gce image, provide credentials for your user or GCP service account, and then upload the gce image directly to the GCP environment. 12.3.1.1. Configuring and uploading a gce image to GCP by using the CLI Set up a configuration file with credentials to upload your gce image to GCP by using the RHEL image builder CLI. Warning You cannot manually import gce image to GCP, because the image will not boot. You must use either gcloud or RHEL image builder to upload it. Prerequisites You have a valid Google account and credentials to upload your image to GCP. The credentials can be from a user account or a service account. 
The account associated with the credentials must have at least the following IAM roles assigned: roles/storage.admin - to create and delete storage objects roles/compute.storageAdmin - to import a VM image to Compute Engine. You have an existing GCP bucket. Procedure Use a text editor to create a gcp-config.toml configuration file with the following content: GCP_BUCKET points to an existing bucket. It is used to store the intermediate storage object of the image which is being uploaded. GCP_STORAGE_REGION is both a regular Google storage region and a dual or multi region. OBJECT_KEY is the name of an intermediate storage object. It must not exist before the upload, and it is deleted when the upload process is done. If the object name does not end with .tar.gz , the extension is automatically added to the object name. GCP_CREDENTIALS is a Base64 -encoded scheme of the credentials JSON file downloaded from GCP. The credentials determine which project the GCP uploads the image to. Note Specifying GCP_CREDENTIALS in the gcp-config.toml file is optional if you use a different mechanism to authenticate with GCP. For other authentication methods, see Authenticating with GCP . Retrieve the GCP_CREDENTIALS from the JSON file downloaded from GCP. Create a compose with an additional image name and cloud provider profile: The image build, upload, and cloud registration processes can take up to ten minutes to complete. Verification Verify that the image status is FINISHED: Additional resources Identity and Access Management Create storage buckets 12.3.1.2. How RHEL image builder sorts the authentication order of different GCP credentials You can use several different types of credentials with RHEL image builder to authenticate with GCP. If RHEL image builder configuration is set to authenticate with GCP using multiple sets of credentials, it uses the credentials in the following order of preference: Credentials specified with the composer-cli command in the configuration file. Credentials configured in the osbuild-composer worker configuration. Application Default Credentials from the Google GCP SDK library, which tries to automatically find a way to authenticate by using the following options: If the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, Application Default Credentials tries to load and use credentials from the file pointed to by the variable. Application Default Credentials tries to authenticate by using the service account attached to the resource that is running the code. For example, Google Compute Engine VM. Note You must use the GCP credentials to determine which GCP project to upload the image to. Therefore, unless you want to upload all of your images to the same GCP project, you always must specify the credentials in the gcp-config.toml configuration file with the composer-cli command. 12.3.1.2.1. Specifying GCP credentials with the composer-cli command You can specify GCP authentication credentials in the upload target configuration gcp-config.toml file. Use a Base64 -encoded scheme of the Google account credentials JSON file to save time. Procedure Get the encoded content of the Google account credentials file with the path stored in GOOGLE_APPLICATION_CREDENTIALS environment variable, by running the following command: In the upload target configuration gcp-config.toml file, set the credentials: 12.3.1.2.2. 
Specifying credentials in the osbuild-composer worker configuration You can configure GCP authentication credentials to be used for GCP globally for all image builds. This way, if you want to import images to the same GCP project, you can use the same credentials for all image uploads to GCP. Procedure In the /etc/osbuild-worker/osbuild-worker.toml worker configuration, set the following credential value: 12.4. Preparing and uploading custom images directly to OCI You can create custom images and then automatically update them to the Oracle Cloud Infrastructure (OCI) instance with RHEL image builder. 12.4.1. Creating and automatically uploading custom images to OCI With RHEL image builder, build customized images and automatically push them directly to your Oracle Cloud Infrastructure (OCI) instance. Then, you can start an image instance from the OCI dashboard. Prerequisites You have root or weldr group user access to the system. You have an Oracle Cloud account. You must be granted security access in an OCI policy by your administrator. You have created an OCI Bucket in the OCI_REGION of your choice. Procedure Open the RHEL image builder interface of the web console in a browser. Click Create blueprint . The Create blueprint wizard opens. On the Details page, enter a name for the blueprint, and optionally, a description. Click . On the Packages page, select the components and packages that you want to include in the image. Click . On the Customizations page, configure the customizations that you want for your blueprint. Click . On the Review page, click Create . To create an image, click Create Image . The Create image wizard opens. On the Image output page, complete the following steps: From the "Select a blueprint" drop-down menu, select the blueprint you want. From the "Image output type" drop-down menu, select Oracle Cloud Infrastructure (.qcow2) . Check the "Upload OCI checkbox to upload your image to the OCI. Enter the "image size" . Click . On the Upload to OCI - Authentication page, enter the following mandatory details: User OCID: you can find it in the Console on the page showing the user's details. Private key On the Upload to OCI - Destination page, enter the following mandatory details and click . Image name: a name for the image to be uploaded. OCI bucket Bucket namespace Bucket region Bucket compartment Bucket tenancy Review the details in the wizard and click Finish . RHEL image builder adds the compose of a RHEL .qcow2 image to the queue. Verification Access the OCI dashboard Custom Images. Select the Compartment you specified for the image and locate the image in the Import image table. Click the image name and verify the image information. Additional resources Managing custom images in the OCI. Managing buckets in the OCI. Generating SSH keys. 12.5. Preparing and uploading customized QCOW2 images directly to OpenStack You can create custom .qcow2 images with RHEL image builder, and manually upload them to the OpenStack cloud deployments. 12.5.1. Uploading QCOW2 images to OpenStack With the RHEL image builder tool, you can create customized .qcow2 images that are suitable for uploading to OpenStack cloud deployments, and starting instances there. RHEL image builder creates images in the QCOW2 format, but with further changes specific to OpenStack. Warning Do not mistake the generic QCOW2 image type output format you create by using RHEL image builder with the OpenStack image type, which is also in the QCOW2 format, but contains further changes specific to OpenStack. 
Prerequisites You have created a blueprint. Procedure Start the compose of a QCOW2 image. Check the status of the building. After the image build finishes, you can download the image. Download the QCOW2 image: Access the OpenStack dashboard and click +Create Image . On the left menu, select the Admin tab. From the System Panel , click Image . The Create An Image wizard opens. In the Create An Image wizard: Enter a name for the image Click Browse to upload the QCOW2 image. From the Format dropdown list, select the QCOW2 - QEMU Emulator . Click Create Image . On the left menu, select the Project tab. From the Compute menu, select Instances . Click the Launch Instance button. The Launch Instance wizard opens. On the Details page, enter a name for the instance. Click . On the Source page, select the name of the image you uploaded. Click . On the Flavor page, select the machine resources that best fit your needs. Click Launch . You can run the image instance using any mechanism (CLI or OpenStack web UI) from the image. Use your private key via SSH to access the resulting instance. Log in as cloud-user . 12.6. Preparing and uploading customized RHEL images to the Alibaba Cloud You can upload a customized .ami images that you created by using RHEL image builder to the Alibaba Cloud. 12.6.1. Preparing to upload customized RHEL images to Alibaba Cloud To deploy a customized RHEL image to the Alibaba Cloud, first you need to verify the customized image. The image needs a specific configuration to boot successfully, because Alibaba Cloud requests the custom images to meet certain requirements before you use it. Note RHEL image builder generates images that conform to Alibaba's requirements. However, Red Hat recommends also using the Alibaba image_check tool to verify the format compliance of your image. Prerequisites You must have created an Alibaba image by using RHEL image builder. Procedure Connect to the system containing the image that you want to check by using the Alibaba image_check tool. Download the image_check tool: Change the file permission of the image compliance tool: Run the command to start the image compliance tool checkup: The tool verifies the system configuration and generates a report that is displayed on your screen. The image_check tool saves this report in the same folder where the image compliance tool is running. Troubleshooting If any of the Detection Items fail, follow the instructions in the terminal to correct it. Additional resources Image Compliance Tool. 12.6.2. Uploading customized RHEL images to Alibaba You can upload a customized AMI image you created by using RHEL image builder to the Object Storage Service (OSS). Prerequisites Your system is set up for uploading Alibaba images. See Preparing for uploading images to Alibaba . You have created an ami image by using RHEL image builder. You have a bucket. See Creating a bucket . You have an active Alibaba Account . You activated OSS . Procedure Log in to the OSS console . In the Bucket menu on the left, select the bucket to which you want to upload an image. In the upper right menu, click the Files tab. Click Upload . A dialog window opens on the right side. Configure the following: Upload To : Choose to upload the file to the Current directory or to a Specified directory. File ACL : Choose the type of permission of the uploaded file. Click Upload . Select the image you want to upload to the OSS Console.. Click Open . Additional resources Upload an object. Creating an instance from custom images. Importing images. 
12.6.3. Importing images to Alibaba Cloud To import a customized Alibaba RHEL image that you created by using RHEL image builder to the Elastic Compute Service (ECS), follow the steps: Prerequisites Your system is set up for uploading Alibaba images. See Preparing for uploading images to Alibaba . You have created an ami image by using RHEL image builder. You have a bucket. See Creating a bucket . You have an active Alibaba Account . You activated OSS . You have uploaded the image to Object Storage Service (OSS). See Uploading images to Alibaba . Procedure Log in to the ECS console. On the left-side menu, click Images . On the upper right side, click Import Image . A dialog window opens. Confirm that you have set up the correct region where the image is located. Enter the following information: OSS Object Address : See how to obtain OSS Object Address . Image Name Operating System System Disk Size System Architecture Platform : Red Hat Optional: Provide the following details: Image Format : qcow2 or ami , depending on the uploaded image format. Image Description Add Images of Data Disks The address can be determined in the OSS management console. After selecting the required bucket in the left menu: Select Files section. Click the Details link on the right for the appropriate image. A window appears on the right side of the screen, showing image details. The OSS object address is in the URL box. Click OK . Note The importing process time can vary depending on the image size. The customized image is imported to the ECS Console. Additional resources Notes for importing images. Creating an instance from custom images. Upload an object. 12.6.4. Creating an instance of a customized RHEL image using Alibaba Cloud You can create instances of a customized RHEL image by using the Alibaba ECS Console. Prerequisites You have activated OSS and uploaded your custom image. You have successfully imported your image to ECS Console. See Importing images to Alibaba . Procedure Log in to the ECS console. On the left-side menu, select Instances . In the upper-right corner, click Create Instance . You are redirected to a new window. Complete all the required information. See Creating an instance by using the wizard for more details. Click Create Instance and confirm the order. Note You can see the option Create Order instead of Create Instance , depending on your subscription. As a result, you have an active instance ready for deployment from the Alibaba ECS Console . Additional resources Creating an instance by using a custom image. Create an instance by using the wizard.
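Picking up the worker-configuration passage at the start of this chapter ("Specifying credentials in the osbuild-composer worker configuration"): the following is a minimal sketch of the /etc/osbuild-worker/osbuild-worker.toml entry, written as a shell snippet. The key-file path is a placeholder, the service unit name used in the restart step is an assumption that may differ on your system, and you should merge the [gcp] section into any existing worker configuration rather than overwriting it.

# Write a global GCP credential for all image uploads performed by the worker
sudo mkdir -p /etc/osbuild-worker
sudo tee /etc/osbuild-worker/osbuild-worker.toml > /dev/null <<'EOF'
[gcp]
credentials = "/etc/osbuild-worker/gcp-service-account.json"
EOF

# The worker reads its configuration at start-up, so restart it afterwards
# (the unit name is an assumption; adjust to match your system)
sudo systemctl restart osbuild-worker@1.service

With this in place, composes that target GCP can reuse the credentials configured for the worker instead of supplying them for each build.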
|
[
"yum install python3 python3-pip",
"pip3 install awscli",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"BUCKET= bucketname aws s3 mb s3://USDBUCKET",
"{ \"Version\": \"2022-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\": { \"sts:Externalid\": \"vmimport\" } } }] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Action\": [\"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\"], \"Resource\": [\"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/ \"] }, { \"Effect\": \"Allow\", \"Action\": [\"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe \"], \"Resource\": \"*\" }] } USDBUCKET USDBUCKET",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json",
"provider = \"aws\" [settings] accessKeyID = \" AWS_ACCESS_KEY_ID \" secretAccessKey = \"AWS_SECRET_ACCESS_KEY\" bucket = \"AWS_BUCKET\" region = \"AWS_REGION\" key = \"IMAGE_KEY\"",
"composer-cli compose start blueprint-name image-type image-key configuration-file .toml",
"composer-cli compose status",
"chmod 400 <_your-instance-name.pem_>",
"ssh -i <_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>",
"rpm --import https://packages.microsoft.com/keys/microsoft.asc",
"[azure-cli] name=Azure CLI baseurl=https://packages.microsoft.com/yumrepos/vscode enabled=1 gpgcheck=1 gpgkey=https://packages.microsoft.com/keys/microsoft.asc",
"yumdownloader azure-cli rpm -ivh --nodeps azure-cli-2.0.64-1.el7.x86_64.rpm",
"az login",
"az storage account keys list --resource-group <resource_group_name> --account-name <storage_account_name>",
"az resource list",
"az storage container create --account-name <storage_account_name> --account-key <key1_value> --name <storage_account_name>",
"az storage blob upload --account-name <_account_name_> --container-name <_container_name_> --file <_image_-disk.vhd> --name <_image_-disk.vhd> --type page",
"az image create --resource-group <_resource_group_name_> --name <_image_>-disk.vhd --os-type linux --location <_location_> --source https://USD<_account_name_>.blob.core.windows.net/<_container_name_>/<_image_>-disk.vhd - Running",
"az vm create --resource-group <_resource_group_name_> --location <_location_> --name <_vm_name_> --image <_image_>-disk.vhd --admin-username azure-user --generate-ssh-keys - Running",
"GOVC_URL GOVC_DATACENTER GOVC_FOLDER GOVC_DATASTORE GOVC_RESOURCE_POOL GOVC_NETWORK",
"govc import.vmdk ./composer-api.vmdk foldername",
"govc import.ova ./composer-api.ova foldername",
"govc vm.create -net.adapter=vmxnet3 -m=4096 -c=2 -g=rhel8_64Guest -firmware=efi -disk=\" foldername /composer-api.vmdk\" -disk.controller=scsi -on=false vmname",
"govc vm.power -on vmname",
"govc vm.ip vmname",
"ssh admin@<_ip_address_of_the_vm_>",
"rpm -qa | grep firefox",
"provider = \"gcp\" [settings] bucket = \"GCP_BUCKET\" region = \"GCP_STORAGE_REGION\" object = \"OBJECT_KEY\" credentials = \"GCP_CREDENTIALS\"",
"sudo base64 -w 0 cee-gcp-nasa-476a1fa485b7.json",
"sudo composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml",
"sudo composer-cli compose status",
"base64 -w 0 \"USD{GOOGLE_APPLICATION_CREDENTIALS}\"",
"provider = \"gcp\" [settings] provider = \"gcp\" [settings] credentials = \"GCP_CREDENTIALS\"",
"[gcp] credentials = \" PATH_TO_GCP_ACCOUNT_CREDENTIALS \"",
"composer-cli compose start blueprint_name openstack",
"composer-cli compose status",
"composer-cli compose image UUID",
"curl -O https://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/attach/73848/cn_zh/1557459863884/image_check",
"chmod +x image_check",
"./image_check"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/creating-cloud-images-with-composer_composing-a-customized-rhel-system-image
|
function::cmdline_arg
|
function::cmdline_arg Name function::cmdline_arg - Fetch a command line argument. Synopsis Arguments n Argument to get (zero is the command itself) General Syntax cmdline_arg:string(n:long) Description Returns the requested argument from the current process, or the empty string when there are not that many arguments or there is a problem retrieving the argument. Argument zero is traditionally the command itself.
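As a quick illustration of this tapset function, the following one-liner prints the invoking command and its first argument for the next open() system call that SystemTap observes; the probe point, format string, and use of execname() are illustrative choices rather than part of the function's definition.

stap -e 'probe syscall.open {
  printf("exec=%s argv0=%s argv1=%s\n", execname(), cmdline_arg(0), cmdline_arg(1));
  exit()
}'

Running a command such as cat /etc/hostname in another terminal while the probe is active should produce one line of output before the script exits.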
|
[
"function cmdline_arg:string(n:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-cmdline-arg
|
Chapter 4. Documentation changes
|
Chapter 4. Documentation changes This section details the major documentation updates delivered with Red Hat OpenStack Platform (RHOSP) 17.1, and the changes made to the documentation set that include adding new features, enhancements, and corrections. The section also details the addition of new titles and the removal of retired or replaced titles. Table 4.1. Documentation changes legend Column Meaning Date The date that the documentation change was published. 17.1 versions impacted The RHOSP 17.1 versions that the documentation change impacts. Unless stated otherwise, a change that impacts a particular version also impacts all later versions. Components The RHOSP components that the documentation change impacts. Affected content The RHOSP documents that contain the change or update. Description of change A brief summary of the change to the document. Table 4.2. Document changes Date 17.1 versions impacted Components Affected content Description of change 29 January 2025 17.1.4 Networking BGP advertisement and traffic redirection Corrected the limit for the number of provider networks. 07 January 2025 17.1.4 Security Rotating service account passwords Added procedure for rotating service account passwords 06 January 2025 17.1.3 Security Manually Updating SSL/TLS Certificates Updated command to use a combination of --limit and --tags in the openstack overcloud deploy command, in order to reduce impact and completion time 20 December 2024 17.1.4 Networking Configuring the leaf networks Added a new step to route the various OSP services through BGP networks. 20 December 2024 17.1.4 Upgrades See "Bug fixes" section under, "Red Hat OpenStack Platform 17.1.4 Maintenance Release - November 20, 2024" Added a new bug, BZ 2315341, to the "Bug fixes" section for RHOSP 17.1.4. 27 November 2024 17.1.4 NFV Creating a bare metal nodes definition file OVS-DPDK parameters Added a new Ansible playbook variable, dpdk_extra , and a new tripleo variable, OvsDpdkExtra . 21 November 2024 17.1.4 Networking Configuring the leaf networks New tripleo parameters for configuring the OVN BGP agent and Free Range Routing (FRR) for a graceful restart have been added. 21 November 2024 17.1.4 Deployment, Compute Obtaining images for overcloud nodes If your RHOSP 17.1.3 or earlier deployment includes a filter rule in nftables or iptables with a LOG action, and the kernel command line ( /proc/cmdline ) has console=tty50 , logging actions can cause substantial latency in packet transmission. A Knowledgebase solution describes a workaround that you must apply to avoid this issue. 12 November 2024 17.1.4 Storage Creating and configuring an internal project for the Block Storage service (cinder) , Configuring the image-volume cache You must configure the volumes and gigabytes quota of the internal Block Storage service project to use the image-volume cache. 23 October 2024 17.1.4 Networking Troubleshooting the DNS service Some incorrect steps in the troubleshooting chapter have been corrected. 04 October 2024 17.1.4 Security Manually Updating SSL/TLS Certificates Removed unnecessary step in procedure 12 September 2024 17.1.4 NFV Creating an environment file for your OVS-DPDK customizations Removed an instance of a tripleo parameter that is no longer used, OvsEnableDpdk . 12 September 2024 17.1.4 NFV Sample DPDK SR-IOV YAML and Jinja2 files Errors have been corrected in the sample DPDK SR-IOV YAML and Jinja2 files. 
06 September 2024 17.1.2 Networking Migrating to the OVN mechanism driver , ML2/OVS to ML2/OVN in-place migration: validated and prohibited scenarios . Removed limitation. You can now migrate to the OVN mechanism driver if the original ML2/OVS environment uses iptables_hybrid firewall and trunk ports. 04 September 2024 17.1.3 Security Troubleshooting Active Directory integration , Troubleshooting Red Hat Identity (IdM) integration . Corrected LDAP search commands. 30 Aug 2024 17.1.4 Networking Sample RHOSP dynamic routing topology The network topology diagram for dynamic routing has been corrected. 22 August 2024 17.1.3 Security Migrating instances Removed section 'Disable live migration', please obtain a support exception if you want to disable live migration. 20 August 2024 17.1.3 Bare Metal Provisioning Enabling ISO boot for bare-metal instances Booting an ISO image directly for use as a RAM disk Added procedures that detail how to boot an ISO image directly for use as a RAM disk. 03 July 2024 17.1 Performance and Scaling Tuning the undercloud Added recommendation to increase message sizes. 05 June 2024 17.1 Edge Migrating to a multistack deployment Removed statement that migrating from single stack to multistack is unsupported. 23 May 2024 17.1 Networking Layer 3 high availability with OVN Added a note prohibiting the use of the --ha option when creating an OVN router. 21 May 2024 17.1 Networking Changing Load-balancing service default settings Documented support for IPv6 for the Load-balancing service (octavia) amphora control subnet. 15 May 2024 17.1 Networking Creating custom virtual routers with router flavors Documented the new Networking service (neutron) plug-in, ovn-router-flavors-ha . 15 May 2024 17.1 NFV Saving power in OVS-DPDK deployments Documented new power saving profile. 29 April 2024 17.1 Security Static Media Removed unsupported content 29 March 2024 17.1 Networking Monitoring OVN database status . A new section describes how to monitor OVN databases. 22 March 2024 17.1 NFV Chapter 8. Configuring OVS TC-flower hardware offload Chapter 7. Configuring an SR-IOV deployment Chapter 10. Configuring an OVS-DPDK deployment There is a new chapter written about OVS TC-flower hardware. Chapters on SR-IOV and OVS-DPDK have been rewritten. 21 March 2024 17.1 Compute, Networking Tagging virtual devices Moved this section from the the Configuring Red Hat OpenStack Platform Networking guide to the Creating and managing instances guide. The commands have been updated and the new content includes how to tag both block devices and virtual NICs while attaching them to an existing instance. 14 March 2024 17.1 Storage Configuring NFS storage Added this section to the Block Storage service back ends topic in Configuring persistent storage . 13 March 2024 17.1 Storage Manage and unmanage volumes and their snapshots Added a new topic that describes the reasons and the associated commands, for managing and unmanaging Block Storage volumes and their snapshots. 4 March 2024 17.1 Networking Replacing a bootstrap Controller node Now you can use the original hostname and IP address for the replacement Controller node when you replace a Controller node. 29 February 2024 17.1 Deployment Installing and managing Red Hat OpenStack Platform with director Customizing your Red Hat OpenStack Platform deployment Updated the content in Installing and managing Red Hat OpenStack Platform with director guide to focus only on the core tasks required to deploy a basic RHOSP environment. 
Content related to optional features and custom configuration are moved into a new guide: Customizing your Red Hat OpenStack Platform deployment . 27 February 2024 17.1 NFV Example Ceph configuration file The topic, "Example Ceph configuration file," has been updated. 19 February 2024 17.1 Networking Configuring floating IP port forwarding Creating port forwarding for a floating IP Two new topics about floating IP port forwarding have been added to Configuring Red Hat OpenStack Platform networking . 16 February 2024 17.1 Security Using Fernet keys for encryption in the overcloud Procedure that is no longer valid for RHOSP 17 as mistral is not included. 12 February 2024 17.1 Networking Deploying Ceph in your dynamic routing environment The first note in the topic has been updated. 5 February 2024 17.1 NFV Sample DPDK SR-IOV YAML and Jinja2 files The chapter containing sample YAML files has been updated. 2 February 2024 17.1 NFV Configuring an SR-IOV deployment Chapter 7 on SR-IOV has been completely rewritten. 25 January 2024 17.1 Networking Defining leaf roles and attaching networks The procedure has changed significantly. 25 January 2024 17.1 NFV Preventing packet loss by managing RX-TX queue size The procedure has been rewritten. 24 January 2024 17.1 Networking Migrating the ML2 mechanism driver from OVS to OVN Now you can migrate to OVN from OVS with VLAN tenant networks and DVR. Also clarified that the environment files shown are just examples, to be replaced with your own files. 23 January 2024 17.1 Security Managing OpenStack Identity resources Old and duplicated material is removed from the Identity resources guide 17 January 2024 17.1 Networking Deploying the DNS service Deploying the DNS service with pre-existing BIND 9 servers A step was added to the two deployment topics instructing administrators to add the name server records (NS records) for the child zones that reside in the DNS server (designate) pool. 15 January 2024 17.1 Edge Updating the central location Deploying edge nodes without storage Deploying edge sites with hyperconverged storage A step was added to several procedures to instruct users to re-run the network provisioning command on the central location, if the network_data.yaml template includes additional networks which were not included when networks were provisioned for the central location. 17 January 2024 17.1.2 director Operator Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1) Added a chapter about how to upgrade an overcloud on RHOCP with director Operator from RHOSP 16.2 to RHOSP 17.1. 17 January 2024 17.1 Upgrades Known issues that might block an upgrade Known issues were removed for the following BZs: BZ#2235621 BZ#2237743 BZ#2228818 17 January 2024 17.1 Updates Validating RHOSP before the undercloud update Validating RHOSP after the overcloud update Updated note to describe the SKIPPED and FAILED statuses that might occur when you run a validation. 17 January 2024 17.1 Upgrades Overcloud adoption for multi-cell environments Added a new module that provides an example of adopting the overcloud in multi-cell environments. 12 January 2024 17.1 DCN Managing separate heat stacks Removed example of manually creating file that is created automatically 11 January 2024 17.1 Networking Deploying Ceph in your dynamic routing environment Added a new topic about how to deploy Red Hat Ceph Storage. 
10 January 2024 17.1 Networking DVR known issues and caveats Refined an item about DHCP in an ML2/OVS, DVR environment: "For ML2/OVS environments, the DHCP server is not distributed and is deployed on a Controller node. The ML2/OVS neutron DCHP agent, which manages the DHCP server, is deployed in a highly available configuration on the Controller nodes, regardless of the routing design (centralized or DVR)." 10 January 2024 17.1 NFV Other parameters Added an IMPORTANT admonition under VhostuserSocketGroup . 10 January 2024 17.1 NFV Tested NICs for NFV Rewrote topic. 8 January 2024 17.1 Networking Configuring policy-based routing Corrected example interface entry: changed route_table: 2 to table: 2 . 8 January 2024 17.1 NFV Launching an RT-KVM instance Corrected a create flavor command example. 30 November 2023 17.1 Security Secure metadef APIs The concept and procedure have been updated for metadef APIs 30 November 2023 17.1 Networking Configuring the L2 population driver The topic, "Configuring the L2 population driver" has been rewritten. 20 November 2023 17.1 Networking Creating secure HTTP load balancers Changed prerequisities for procedures in Chapter 9. 15 November 2023 17.1 Security Implementing TLS-e with Ansible Added optional step to include CertmongerKerberosRealm parameter when the IPA realm does not match the IPA domain. 15 November 2023 17.1 Edge Deploying distributed compute node architecture with TLS-e Removed unnecessary (but non-impactful) steps from instructions for TLSe in DCN guide 7 November 2023 17.1 All Example: Providing feedback on Red Hat documentation Replaced the Direct Documentation Feedback (DDF) instructions with the Create Issue Jira form link. DDF was removed for Red Hat OpenStack Platform, and feedback must now be provided in Jira. 03 November 2023 17.1 Networking Customizing NIC mappings for pre-provisioned nodes Modified the topic, "Customizing NIC mappings for pre-provisioned nodes." 03 November 2023 17.1 Networking Network interface configuration options Corrected example for a Linux bond. 02 November 2023 17.1 Networking High-level changes in Red Hat OpenStack Platform 17.1 Added item: "In ML2/OVN deployments, you can enable egress minimum and maximum bandwidth policies for hardware offloaded ports." 2 November 2023 17.1 Networking ML2/OVS to ML2/OVN in-place migration scenarios that have not been validated Added known issue prohibiting migration to the OVN mechanism driver if your original ML2/OVS environment includes iptables hybrid firewall and trunk ports. 01 November 2023 17.1 Networking Migration constraints Re-wrote sub-topic, "Live migration on ML2/OVS deployments." 31 October 2023 17.1 Networking Deploying a spine-leaf enabled overcloud Deploying a spine-leaf enabled overcloud Removed the VIP definition file, spine-leaf-vip-data.yaml , from the overcloud deploy command example. 30 October 2023 17.1 Networking Adding a composable network Configuring DNS endpoints Described how to use the CloudName{network.name} definition to set the DNS name for an API endpoint on a composable network that uses a virtual IP. 30 October 2023 17.1 NFV Configuring components of OVS hardware offload Added note about Red Hat Enterprise Linux Traffic Control (TC) subsystem supporting connection tracking (conntrack) helpers or application layer gateways (ALGs). 24 October 2023 17.1 Networking Configuring Network service availability zones with ML2/OVN Considerations for networking on DCN architecture Added an important admonition about router gateway ports. 
24 October 2023 17.1 Networking Specifying the name that DNS assigns to port Preparing the undercloud Added an important admonition about internal DNS resolution for port names. 24 October 2023 17.1 Networking Adding a new leaf to a spine-leaf deployment Modified step 8, and added three new steps (9-11). 23 October 2023 17.1 Networking Performing basic ICMP testing within the ML2/OVN namespace Clarified login example (step 4). 20 October 2023 17.1 Networking Exporting the DNS service pool configuration Updated the procedure to describe how to run the command inside a container. 17 October 2023 17.1 Networking Setting the subnet for virtual IP addresses Removed mention of the VipSubnetMap parameter, plus some other changes made. 12 October 2023 17.1 Storage Creating and managing images Content about creating images has been moved to its own chapter called 'Creating RHEL KVM or RHOSP-compatible images'. 05 October 2023 17.1 Networking Configuring DNS as a service Instances of the tripleo template filename have changed from enable-designate.yaml to designate.yaml . 05 October 2023 17.1 NFV Deploying OVN with OVS-DPDK and SR-IOV Configuring OVS-DPDK parameters Configuring OVS-DPDK parameters The step about adding custom resources for OVS-DPDK with the resource_registry parameter has been removed. 05 October 2023 17.1 NFV Tested NICs for NFV Replaced the filename, compute-ovs-dpdk.yaml , with the phrase, "j2 network configuration template." 05 October 2023 17.1 NFV Configuring NIC partitioning The YAML file, os-net-config.yaml , has been changed to, roles_data.yaml . 05 October 2023 17.1 NFV Registering and enabling repositories The repository name, openstack-for-rhel-9-x86_64-rpms , has been changed to, openstack-17.1-for-rhel-9-x86_64-rpms . 04 October 2023 17.1 Networking Enabling custom composable networks Removing an overcloud stack Replaced networks definition file, network_data.yaml , with network_data_v2.yaml . 02 October 2023 17.1 Networking Configuring the leaf networks Installing and configuring the undercloud for RHOSP dynamic routing The FrrBgpAsn and FrrOvnBgpAgentAsn parameters are now role-based. There is a new parameter, tripleo_frr_ovn_bgp_agent_enable . 29 September 2023 17.1 Security Enabling FIPS Corrected procedure so that FIPS images are uploaded to glance. 27 September 2023 17.1 Networking Constraints for RHOSP dynamic routing Added item that describes high connectivity downtime during an FRR update for RHOSP dynamic routing environments. 27 September 2023 17.1, 17.0, 16.2, 16.1 Compute Adding dynamic metadata to instances The configuration for dynamic metadata you use in your Compute environment file has been updated. 26 September 2023 17.1 Networking Limiting queries to the metadata service A new procedure, "Limiting queries to the metadata service," has been added to Configuring Red Hat OpenStack Platform networking . 25 September 2023 17.1 NFV Configuring network functions virtualization Configuring network functions virtualization has been updated with content from what was the Network Functions Virtualization Product Guide . 22 September 2023 17.1 Deployment Deploying Red Hat OpenStack Platform at scale This guide is being reviewed and will be republished on the Customer Portal when the reviewed content is available for enterprise use. 20 September 2023 17.1 Networking Creating custom virtual routers with router flavors Added a chapter for the Technology Preview of the router flavors feature. 
19 September 2023 17.1 Updates Rebooting Compute nodes Added note stating that only cold migration is supported when migrating virtual machines from RHEL 9.2 to RHEL 8.4 in a Multi-RHEL environment. 19 September 2023 17.1 Upgrades Creating roles for Multi-RHEL Compute nodes Upgrading the Compute node operating system Updated the parameter that sets the RHEL version on Compute nodes in a Multi-RHEL environment. Removed step to modify the skip_rhel_release.yaml file. 18 September 2023 17.1 Edge Deploying the central site with storage Deploying edge nodes without storage Deploying edge sites with hyperconverged storage Updating the central location Delete the DistributedComputeHCI node Deploying the central controllers without edge storage Removed mentions of heat template podman.yaml, which is no longer needed. 18 September 2023 17.1 Updates Validating RHOSP before the undercloud update Validating RHOSP after the overcloud update Added procedures to validate your RHOSP environment before the undercloud update and after the overcloud update. 12 September 2023 17.1 Networking link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_migrating-virtual-machine-instances-between-compute-nodes_migrating-instances#con_migration-constraints_migrating-instances Removed the sub-section, "Packet loss on ML2/OVN deployments" from the section, "16.3.2. Migration constraints," in the Configuring the Compute service for instance creation guide. 12 September 2023 17.1 NFV Chapter 6. Preparing network functions virtualization (NFV) A new chapter, "Chapter 6. Preparing network functions virtualization (NFV)," has been added to the Framework for upgrades (16.2 to 17.1) guide. 11 September 2023 17.1 Networking Chapter 20. Replacing Controller nodes Changes made to Chapter 20 to address the OVN database partition issue described in BZ 2222543 08 September 2023 17.1 Backup and Restore Backing up and restoring the undercloud and control plane nodes Backing up and restoring the undercloud and control plane nodes is now published. 07 September 2023 17.1 Networking QoS rules To Table 9.1, added footnote (#8) stating that RHOSP does not support QoS for trunk ports. 31 August 2023 17.1 Networking Configuring the leaf networks and Configuring the leaf networks Renamed section 4.4 to "Configuring the leaf networks" in both the Configuring dynamic routing in Red Hat OpenStack Platform guide and the Configuring spine-leaf networking guide. 30 August 2023 17.1 Networking Overview of allowed address pairs Added a definition for a virtual port (vport). 30 August 2023 17.1 Security Creating images Removed deprecated example for building images and replaced with link to image builder documentation 29 August 2023 17.1 Documentation Command line interface reference Configuration reference Overcloud parameters New editions published. 28 August 2023 17.1 NFV Planning for your RT-KVM Compute nodes The repositories listed in this procedure have changed. 28 August 2023 17.1 NFV Registering and enabling repositories The repositories listed in this procedure have changed. 25 August 2023 17.1 Firewall rules for Red Hat OpenStack Platform Firewall rules for Red Hat OpenStack Platform New edition published. 22 August 2023 17.1 NFV Configuring OVS PMD Auto Load Balance The OVS Poll Mode Driver (PMD) automatic load balancing feature graduated from Technology Preview to full support. Also, the configuration procedure changed. 
21 August 2023 17.1 DCN Considerations for networking on DCN architecture The RHOSP Load-balancing service (octavia) is no longer listed as unsupported in a DCN environment. 16 August 2023 17.1 Documentation Configuring dynamic routing in Red Hat OpenStack Platform A new guide for RHOSP 17.1. 16 August 2023 17.1 Documentation Network Functions Virtualization Product Guide This guide is being reviewed and will be added after the initial release. This list of unpublished guides will be updated when these guides are published. 16 August 2023 17.1 Documentation Documentation library updates Updated the titles for some of the guides from the 17.0 title. 16 August 2023 17.1 Documentation Backing up Block Storage volumes This guide has been updated, restructured and rewritten. The cinder CLI commands have been replaced with openstack CLI commands, where possible. Table 4.3. Documentation library title changes title Current title Bare Metal Provisioning Configuring the Bare Metal Provisioning service Block Storage Backup Guide Backing up Block Storage volumes Custom Block Storage Back End Deployment Guide Deploying a custom Block Storage back end Deployment Recommendations for Specific Red Hat OpenStack Platform Services Removed. Installing and managing Red Hat OpenStack Platform with director Installing and managing Red Hat OpenStack Platform with director Distributed compute node and storage deployment Deploying a Distributed Compute Node (DCN) architecture External Load Balancing for the Overcloud Content moved to Managing high availability services . High Availability Deployment and Usage Managing high availability services High Availability for Compute Instances Configuring high availability for instances Hyperconverged Infrastructure Guide Deploying a hyperconverged infrastructure Introduction to the OpenStack Dashboard Managing cloud resources with the Openstack Dashboard IPv6 networking for the overcloud Configuring IPv6 networking for the overcloud Keeping Red Hat OpenStack Platform Updated Performing a minor update of Red Hat OpenStack Platform Network Functions Virtualization Planning and Configuration Guide Configuring network functions virtualization Network Functions Virtualization Product Guide Removed. Content will be moved to Configuring network functions virtualization . Networking Guide Configuring Red Hat OpenStack Platform networking OpenStack Integration Test Suite Guide Validating your cloud with the Red Hat OpenStack Platform Integration Test Suite Operational Measurements Managing overcloud observability Partner Integration Removed. Product Guide Introduction to Red Hat OpenStack Platform Recommendations for Large Deployments Deploying Red Hat OpenStack Platform at scale RHOSP director Operator for OpenShift Container Platform Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator Security and Hardening Guide Hardening Red Hat OpenStack Platform Spine Leaf Networking Configuring spine-leaf networking Standalone Deployment Guide Removed. Storage Guide Configuring persistent storage Testing Migration of the Networking Service to the ML2/OVN Mechanism Driver Migrating to the OVN mechanism driver Transitioning to Containerized Services Removed Users and Identity Management Guide Managing OpenStack Identity resources Using Designate for DNS-as-a-Service Configuring DNS as a service Using Octavia for Load Balancing-as-a-Service Configuring load balancing as a service
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/doc-changes_rhosp-relnotes
|
Chapter 5. RoleBinding [authorization.openshift.io/v1]
|
Chapter 5. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required subjects roleRef 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources groupNames array (string) GroupNames holds all the groups directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata roleRef ObjectReference RoleRef can only reference the current namespace and the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. Since Policy is a singleton, this is sufficient knowledge to locate a role. subjects array (ObjectReference) Subjects hold object references to authorize with this rule. This field is ignored if UserNames or GroupNames are specified to support legacy clients and servers. Thus newer clients that do not need to support backwards compatibility should send only fully qualified Subjects and should omit the UserNames and GroupNames fields. Clients that need to support backwards compatibility can use this field to build the UserNames and GroupNames. userNames array (string) UserNames holds all the usernames directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindings GET : list objects of kind RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings GET : list objects of kind RoleBinding POST : create a RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding 5.2.1. /apis/authorization.openshift.io/v1/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.1. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 5.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.2. 
HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body RoleBinding schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 5.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method DELETE Description delete a RoleBinding Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 5.9. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body RoleBinding schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty
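As a sketch of how the endpoints above are typically exercised, the following creates a RoleBinding with oc from an inline manifest and then reads it back; the namespace, role, and user names are hypothetical, and only the required subjects and roleRef fields are shown (legacy clients would populate userNames or groupNames instead).

# Bind user "alice" to the "view" role in the "my-project" namespace
oc create -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: RoleBinding
metadata:
  name: view-alice
  namespace: my-project
roleRef:
  name: view
subjects:
- kind: User
  name: alice
EOF

# Read the object back through the GET endpoint for a single RoleBinding
oc get rolebinding.authorization.openshift.io view-alice -n my-project -o yaml

Because roleRef here carries no namespace, it is expected to resolve against the global namespace, per the roleRef description in the specification above.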
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/role_apis/rolebinding-authorization-openshift-io-v1
|
Preface
|
Preface Depending on the type of your deployment, you can choose one of the following procedures to replace a storage device: For dynamically created storage clusters deployed on AWS, see: Section 1.1, "Replacing operational or failed storage devices on AWS user-provisioned infrastructure" . Section 1.2, "Replacing operational or failed storage devices on AWS installer-provisioned infrastructure" . For dynamically created storage clusters deployed on VMware, see Section 2.1, "Replacing operational or failed storage devices on VMware infrastructure" . For dynamically created storage clusters deployed on Microsoft Azure, see Section 3.1, "Replacing operational or failed storage devices on Azure installer-provisioned infrastructure" . For storage clusters deployed using local storage devices, see: Section 5.1, "Replacing operational or failed storage devices on clusters backed by local storage devices" . Section 5.2, "Replacing operational or failed storage devices on IBM Power" . Section 5.3, "Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure" . Note OpenShift Data Foundation does not support heterogeneous OSD sizes.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/replacing_devices/preface-replacing-devices
|
Chapter 7. Managing network policies
|
Chapter 7. Managing network policies A Kubernetes network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. These network policies are configured as YAML files. By looking at these files alone, it is often hard to identify whether the applied network policies achieve the desired network topology. Red Hat Advanced Cluster Security for Kubernetes (RHACS) gathers all defined network policies from your orchestrator and provides tools to make these policies easier to use. To support network policy enforcement, RHACS provides the following tools: Network graph Network policy generator Network policy simulator Build-time network policy generator 7.1. Network graph 7.1.1. About the network graph The network graph provides high-level and detailed information about deployments, network flows, and network policies in your environment. RHACS processes all network policies in each secured cluster to show you which deployments can contact each other and which can reach external networks. It also monitors running deployments and tracks traffic between them. You can view the following items in the network graph: Internal entities These represent a connection between a deployment and an IP address that belongs to the private address space as defined in RFC 1918 . For more information, see "Connections involving internal entities". External entities These represent a connection between a deployment and an IP address that does not belong to the private address space as defined in RFC 1918 . For more information, see "External entities and connections in the network graph". Network components From the top menu, you can select namespaces (indicated by the NS label) and deployments (indicated by the D label) to display on the graph for a chosen cluster (indicated by the CL label). You can further filter deployments by using the drop-down list and selecting criteria on which to filter, such as common vulnerabilities and exposures (CVEs), labels, and images. Network flows You can select one of the following flows for the graph: Active traffic Selecting this default option shows observed traffic, focused on the namespace or specific deployment that you selected. You can select the time period for which to display information. Inactive flows Selecting this option shows potential flows allowed by your network policies, helping you identify missing network policies needed to achieve tighter isolation. You can select the time period for which to display information. Network policies You can view existing policies for a selected component or view components that have no policies. You can also simulate network policies from the network graph view. See "Simulating network policies from the network graph" for more information. 7.1.1.1. Displays, navigation, and the user interface in the network graph You can use the network graph, shown in the following graphic, to click on items and view additional information about them. You can also perform actions within the graph such as adding a network flow to your baseline. Figure 7.1. Network graph example The following tips can help you use the network graph: Opening the legend provides information about the symbols in use and their meaning. The legend shows explanatory text for symbols representing namespaces, deployments, and connections on the network graph. 
Selecting additional display options from the drop-down list controls whether the graph displays icons such as the network policy status badge, active external traffic badge, and port and protocol labels for edge connections. RHACS detects changes in network traffic, such as nodes joining or leaving. If changes are detected, the network graph displays a notification showing the number of updates available. To avoid interrupting your focus, the graph is not updated automatically. Click the notification to update the graph. When you click an item in the graph, the rearranged side panel with collapsible sections presents information about that item. You can click on the following items: Deployments Namespaces External entities CIDR blocks External groups The side panel displays relevant information based on the item in the graph that you have selected. The D or NS label to the item name in the header (in this example, "visa-processor") indicates if it is a deployment or a namespace. The following example illustrates the side panel for a deployment. Figure 7.2. Side panel for a deployment example When viewing a namespace, the side panel includes a search bar and a list of deployments. You can click on a deployment to view its information. The side panel also includes a Network policies tab. From this tab, you can view, copy to the clipboard, or export any network policy defined in that namespace, as shown in the following example. Figure 7.3. Side panel for a namespace example 7.1.1.2. External entities and connections in the network graph The network graph view shows network connections between managed clusters and external sources. In addition, RHACS automatically discovers and highlights public Classless Inter-Domain Routing (CIDR) address blocks, such as Google Cloud, AWS, Microsoft Azure, Oracle Cloud, and Cloudflare. Using this information, you can identify deployments with active external connections and decide if they are making or receiving unauthorized connections from outside your network. By default, the external connections point to a common External Entities icon and different CIDR address blocks in the network graph. However, you can choose not to show auto-discovered CIDR blocks by clicking Manage CIDR blocks and deselecting Auto-discovered CIDR blocks . RHACS includes IP ranges for the following cloud providers: Google Cloud AWS Microsoft Azure Oracle Cloud Cloudflare RHACS fetches and updates the cloud providers' IP ranges every 7 days, and updates CIDR blocks daily. If you are using offline mode, you can update these ranges by installing new support packages. The following image provides an example of the network graph. In this example, based on the options that the user has chosen, the graph depicts deployments in the selected namespace. Traffic flows are not displayed until you click on an item such as a deployment. The graph uses a red badge to indicate deployments that are missing policies and therefore allowing all network traffic. 7.1.1.3. Connections involving internal entities The network graph is useful for identifying deployments with active connections to entities that do not belong to any known deployment or CIDR block. Some of these connections never reach outside of the cluster and are made within the cluster's private network. The network graph represents those as connections to or from internal entities . 
Connections with internal entities represent a connection between a deployment and an IP address that belongs to the private address space as defined in RFC 1918 . In some cases, Sensor is unable to identify one or both deployments involved in a connection. In that case, the system analyzes the IP address and decides whether the connection is internal or external. The following scenarios can lead to a connection being categorized as one involving internal entities: A change of IP address or the deletion of a deployment accepting connections (the server) while the party initiating the connection (the client) still attempts to reach it A deployment communicating with the orchestrator API A deployment communicating using a networking CNI plugin, for example, Calico A restart of Sensor, resulting in a reset of the mapping of IP addresses to past deployments, for example, when Sensor does not recognize the IP addresses of past entities or past IP addresses of existing entities A connection that involves an entity not managed by the orchestrator (in some cases, that might be seen as outside of the cluster ) but is using an IP address from the private address space as defined in RFC 1918 Internal entities are indicated with an icon as shown in the following graphic. Clicking on Internal entities shows the flows for these entities. Figure 7.4. Internal entities example 7.1.2. Access control and permissions To view network graphs, the user must have at least the permissions granted to the Network Graph Viewer default permission set. The following permissions are granted to the Network Graph Viewer permission set: Read Deployment Read NetworkGraph Read NetworkPolicy For more information, see "System permission sets" in the "Additional resources" section. Additional resources System permission sets 7.1.3. Viewing deployment information The network graph provides a visual map of deployments, namespaces, and connections that RHACS has discovered. By clicking on a deployment in the graph, you can view information about the deployment, including the following details: Network security, such as the number of flows, existing or missing network policy rules, and listening ports Labels and annotations Port configurations Container information Anomalous and baseline flows for ingress and egress connections, including protocols and port numbers Network policies Procedure To view details for deployments in a namespace: In the RHACS portal, go to Network Graph and select your cluster from the drop-down list. Click the Namespaces list and use the search field to locate a namespace, or select individual namespaces. Click the Deployments list and use the search field to locate a deployment, or select individual deployments to display in the network graph. In the network graph, click on a deployment to view the information panel. Click the Details , Flows , Baseline , or Network policies tab to view the corresponding information. 7.1.4. Viewing network policies in the network graph Network policies specify how groups of pods are allowed to communicate with each other and with other network endpoints. Kubernetes NetworkPolicy resources use labels to select pods and define rules that specify what traffic is allowed to or from the selected pods. RHACS discovers and displays network policy information for all your Kubernetes clusters, namespaces, deployments, and pods, in the network graph. Procedure In the RHACS portal, go to Network Graph and select your cluster from the drop-down list. 
Click the Namespaces list and select individual namespaces, or use the search field to locate a namespace. Click the Deployments list and select individual deployments, or use the search field to locate a deployment. In the network graph, click on a deployment to view the information panel. In the Details tab, in the Network security section, you can view summary messages about network policy rules that give the following information: If policies exist in the network that regulate ingress or egress traffic If your network is missing policies and is therefore allowing all ingress or egress traffic To view the YAML file for the network policies, you can click on the policy rule, or click the Network policies tab. 7.1.5. Configuring CIDR blocks in the network graph You can specify custom CIDR blocks or configure the display of auto-discovered CIDR blocks in the network graph. Procedure In the RHACS portal, go to Network Graph , and then select Manage CIDR Blocks . You can perform the following actions: Toggle Auto-discovered CIDR blocks to hide auto-discovered CIDR blocks in the network graph. Note When you hide the auto-discovered CIDR blocks, the auto-discovered CIDR blocks are hidden for all clusters, and not only for the selected cluster in the network graph. Add a custom CIDR block to the graph by performing the following steps: Enter the CIDR name and CIDR address in the fields. To add additional CIDR blocks, click Add CIDR block and enter information for each block. Click Update Configuration to save the changes. 7.2. Using the network graph to generate and simulate network policies 7.2.1. About generating policies from the network graph A Kubernetes network policy controls which pods receive incoming network traffic, and which pods can send outgoing traffic. By using network policies to enable and disable traffic to or from pods, you can limit your network attack surface. These network policies are YAML configuration files. It is often difficult to gain insights into the network flow and manually create these files. You can use RHACS to generate these files. When you automatically generate network policies, RHACS follows these guidelines: RHACS generates a single network policy for each deployment in the namespace. The pod selector for the policy is the pod selector of the deployment. If a deployment already has a network policy, RHACS does not generate new policies or delete existing policies. Generated policies only restrict traffic to existing deployments. Deployments that you create later will not have any restrictions unless you create or generate new network policies for them. If a new deployment needs to contact a deployment with a network policy, you might need to edit the network policy to allow access. Each policy has the same name as the deployment name, prefixed with stackrox-generated- . For example, the policy name for the deployment depABC in the generated network policy is stackrox-generated-depABC . All generated policies also have an identifying label. RHACS generates a single rule allowing traffic from any IP address if one of the following conditions are met: The deployment has an incoming connection from outside the cluster within the selected time The deployment is exposed through a node port or load balancer service RHACS generates one ingress rule for every deployment from which there is an incoming connection. For deployments in the same namespace, this rule uses the pod selector labels from the other deployment. 
For deployments in different namespaces, this rule uses a namespace selector. To make this possible, RHACS automatically adds a label, namespace.metadata.stackrox.io/name , to each namespace. Important In rare cases, if a standalone pod does not have any labels, the generated policy allows traffic from or to the pod's entire namespace. 7.2.2. Generating network policies in the network graph RHACS lets you automatically generate network policies based on the actual observed network communication flows in your environment. You can generate policies based on the cluster, namespaces, and deployments that you have selected in the network graph. Policies are generated for any deployments that are included in the current Network Graph scope. For example, the current scope could include the entire cluster, a cluster and namespaces, or individually selected deployments in the selected namespaces. You can also further reduce the scope by applying one of the filters from the Filter deployments field with any combination of the cluster, namespace, and deployment selections. For example, you could narrow the scope to deployments in a specific cluster and namespace that are affected by a specific CVE. Policies are generated from the traffic observed during the baseline discovery period. In the RHACS portal, go to Network Graph . Select a cluster, and then select one or more namespaces. Optional: Select individual deployments to restrict the policy generated to only those deployments. You can also use the Filter deployments feature to further narrow the scope. In the network graph header, select Network policy generator . Optional: In the information panel that opens, select Exclude ports & protocols to remove the port/protocol restrictions when generating network policies from a baseline. As an example, the nginx3 deployment makes a port 80 connection to nginx4 , and this is included as part of the baseline for nginx4 . If policies are generated and this checkbox is not selected (the default behavior), the generated policy will restrict the allowed connections from nginx3 to nginx4 to only port 80. If policies are generated with this option selected, the generated policy will allow any port in the connection from nginx3 to nginx4 . Click Generate and simulate network policies . RHACS generates policies for the scope that you have chosen. This scope is displayed at the top of the Generate network policies panel. Note Clicking on the deployment information in the scope displays a list of the deployments that are included. Optional: Copy the generated network policy configuration YAML file to the clipboard or download it by clicking the download icon in the panel. Optional: To compare the generated network policies to the existing network policies, click Compare . The YAML files for existing and generated network policies are shown in a side-by-side view. Note Some items do not have generated policies, such as namespaces with existing ingress policies or deployments in certain protected namespaces such as as stackrox or acs . Optional: Click the Actions menu to perform the following activities: Share the YAML file with notifiers: Sends the YAML file to one of the system notifiers you have configured, for example, Slack, ServiceNow, or an application that uses generic webhooks. These notifiers are configured by navigating to Platform Configuration Integrations . See the documentation in the "Additional resources" section for more information. 
Rebuild rules from active traffic: Refreshes the generated policies that are displayed. Revert rules to previously applied YAML: Removes the simulated policy and reverts to the last network policy. 7.2.3. Saving generated policies in the network graph You can download and save the generated network policies from RHACS. Use this option to download policies so that you can commit the policies into a version control system such as Git. Procedure After generating a network policy, click the Download YAML icon in the Network Policy Simulator panel. 7.2.4. Testing generated policies in the network graph After you download the network policies that RHACS generates, you can test them by applying them to your cluster by using the CLI or your automated deployment procedures. You cannot apply generated network policies directly in the network graph. Procedure To create policies using the saved YAML file, run the following command: USD oc create -f "<generated_file>.yml" 1 1 If you use Kubernetes, enter kubectl instead of oc . If the generated policies cause problems, you can remove them by running the following command: USD oc delete -f "<generated_file>.yml" 1 1 If you use Kubernetes, enter kubectl instead of oc . Warning Directly applying network policies might cause problems for running applications. Always download and test the network policies in a development environment or test clusters before applying them to production workloads. 7.2.5. Reverting to a previously applied policy in the network graph You can remove a policy and revert to a previously applied policy. Procedure In the RHACS portal, go to Network Graph . Select a cluster name from the menu on the top bar. Select one or more namespaces and deployments. Select Simulate network policy . Select View active YAMLS . From the Actions menu, select Revert rules to previously applied YAML . Warning Directly applying network policies might cause problems for running applications. Always download and test the network policies in a development environment or test clusters before applying them to production workloads. 7.2.6. Deleting all policies autogenerated in the network graph You can delete all automatically generated policies from your cluster that you have created by using RHACS. Procedure Run the following command: USD oc get ns -o jsonpath='{.items[*].metadata.name}' | \ xargs -n 1 oc delete networkpolicies -l \ 'network-policy-generator.stackrox.io/generated=true' -n 1 1 If you use Kubernetes, enter kubectl instead of oc . 7.2.7. Simulating network policies from the network graph Your current network policies might allow unneeded network communications. You can use the network policy generator to create network policies that restrict ingress traffic to the computed baselines for a set of deployments. Note The Network Graph does not display the generated policies in the visualization. Generated policies are only for ingress traffic and policies that restrict egress traffic are not generated. Procedure In the RHACS portal, go to Network Graph . Select a cluster, and then select one or more namespaces. On the network graph header, select Network policy generator . Optional: To generate a YAML file with network policies to use in the simulation, click Generate and simulate network policies . For more information, see "Generating network policies in the network graph". Upload a YAML file of a network policy that you want to use in the simulation. The network graph view displays what your proposed network policies would achieve. 
Perform the following steps: Click Upload YAML and then select the file. Click Open . The system displays a message to indicate the processing status of the uploaded policy. You can view active YAML files that correspond to the current network policies by clicking the View active YAMLS tab, and then selecting policies from the drop-down list. You can also perform the following actions: Click the appropriate button to copy or download the displayed YAML file. Use the Actions menu to rebuild rules from active traffic or revert rules to a previously applied YAML. For more information, see "Generating network policies in the network graph". Additional resources Updating kernel support packages in offline mode Integrating using generic webhooks 7.3. About network baselining in the network graph In RHACS, you can minimize your risks by using network baselining. It is a proactive approach to keep your infrastructure secure. RHACS first discovers existing network flows and creates a baseline, and then it treats network flows outside of this baseline as anomalous. When you install RHACS, there is no default network baseline. As RHACS discovers network flows, it creates a baseline and then it adds all discovered network flows to it, following these guidelines: When RHACS discovers new network activity, it adds that network flow to the network baseline. Network flows do not show up as anomalous flows and do not trigger any violations. After the discovery phase, the following actions occur: RHACS stops adding network flows to the network baselines. New network flows that are not in the network baseline show up as anomalous flows but they do not trigger any violations. 7.3.1. Viewing network baselines from the network graph You can view network baselines from the network graph view. Procedure Click the Namespaces list and use the search field to locate a namespace, or select individual namespaces. Click the Deployments list and use the search field to locate a deployment, or select individual deployments to display in the network graph. In the network graph, click on a deployment to view the information panel. Select the Baseline tab. Use the filter by entity name field to further restrict the flows that are displayed. Optional: You can mark baseline flows as anomalous by performing one of the following actions: Select an individual entity. Click the overflow menu, , and then select Mark as anomalous . Select multiple entities, and then click Bulk actions and select Mark as anomalous . Optional: Check the box to exclude ports and protocols. Optional: To save the baseline as a network policy YAML file, click Download baseline as network policy . 7.3.2. Downloading network baselines from the network graph You can download network baselines as YAML files from the network graph view. Procedure In the RHACS portal, go to Network Graph . Click the Namespaces list and use the search field to locate a namespace, or select individual namespaces. Click the Deployments list and use the search field to locate a deployment, or select individual deployments to display in the network graph. In the network graph, click on a deployment to view the information panel. The Baseline tab lists the baseline flows. Use the filter by entity name field to further restrict the list of flows. Optional: Check the box to exclude ports and protocols. Click Download baseline as network policy . 7.3.3. 
Configuring network baselining time frame You can use the ROX_NETWORK_BASELINE_OBSERVATION_PERIOD and the ROX_BASELINE_GENERATION_DURATION environment variables to configure the observation period and the network baseline generation duration. Procedure Set the ROX_NETWORK_BASELINE_OBSERVATION_PERIOD environment variable by running the following command: USD oc -n stackrox set env deploy/central \ 1 ROX_NETWORK_BASELINE_OBSERVATION_PERIOD=<value> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Value must be time units, for example: 300ms , -1.5h , or 2h45m . Valid time units are ns , us or ms , ms , s , m , h . Set the ROX_BASELINE_GENERATION_DURATION environment variable by running the following command: USD oc -n stackrox set env deploy/central \ 1 ROX_BASELINE_GENERATION_DURATION=<value> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Value must be time units, for example: 300ms , -1.5h , or 2h45m . Valid time units are ns , us or ms , ms , s , m , h . 7.3.4. Enabling alerts on baseline violations in the network graph You can configure RHACS to detect anomalous network flows and trigger violations for traffic that is not in the baseline. This can help you determine if the network contains unwanted traffic before you block traffic with a network policy. Procedure Click the Namespaces list and use the search field to locate a namespace, or select individual namespaces. Click the Deployments list and use the search field to locate a deployment, or select individual deployments to display in the network graph. In the network graph, click on a deployment to view the information panel. In the Baseline tab, you can view baseline flows. Use the filter by entity name field to further restrict the flows that are displayed. Toggle the Alert on baseline violations option. After you toggle the Alert on baseline violations option, anomalous network flows trigger violations. You can toggle the Alert on baseline violations option again to stop receiving violations for anomalous network flows.
|
[
"oc create -f \"<generated_file>.yml\" 1",
"oc delete -f \"<generated_file>.yml\" 1",
"oc get ns -o jsonpath='{.items[*].metadata.name}' | xargs -n 1 oc delete networkpolicies -l 'network-policy-generator.stackrox.io/generated=true' -n 1",
"oc -n stackrox set env deploy/central \\ 1 ROX_NETWORK_BASELINE_OBSERVATION_PERIOD=<value> 2",
"oc -n stackrox set env deploy/central \\ 1 ROX_BASELINE_GENERATION_DURATION=<value> 2"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/manage-network-policies
|
Chapter 7. Configuring HA cluster resources on Red Hat OpenStack Platform
|
Chapter 7. Configuring HA cluster resources on Red Hat OpenStack Platform The following table lists the RHOSP-specific resource agents you use to configure resources for an HA cluster on RHOSP. openstack-info (required) Provides support for RHOSP-specific resource agents. You must configure an openstack-info resource as a cloned resource for your cluster in order to run any other RHOSP-specific resource agent other than the fence_openstack fence agent. For information about configuring an openstack-info resource see Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform . openstack-virtual-ip Configures a virtual IP address resource. For information about configuring an openstack-virtual-ip resource, see Configuring a virtual IP address in an HA cluster on Red Hat Openstack Platform . openstack-floating-ip Configures a floating IP address resource. For information about configuring an openstack-floating-ip resource, see Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform . openstack-cinder-volume Configures a block storage resource. For information about configuring an openstack-cinder-volume resource, see Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform . When configuring other cluster resources, use the standard Pacemaker resource agents. 7.1. Configuring an openstack-info resource in an HA cluster on Red Hat OpenStack Platform (required) You must configure an openstack-info resource in order to run any other RHOSP-specific resource agent except for the fence_openstack fence agent. This procedure to create an openstack-info resource uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-info resource agent, run the following command. Create the openstack-info resource as a clone resource. In this example, the resource is also named openstack-info . This example uses a clouds.yaml configuration file and the cloud= parameter is set to the name of the cloud in your clouds.yaml file. Check the cluster status to verify that the resource is running. 7.2. Configuring a virtual IP address in an HA cluster on Red Hat Openstack Platform This procedure to create an RHOSP virtual IP address resource for an HA cluster on an RHOSP platform uses a clouds.yaml file for RHOSP authentication. The RHOSP virtual IP resource operates in conjunction with an IPaddr2 cluster resource. When you configure an RHOSP virtual IP address resource, the resource agent ensures that the RHOSP infrastructure associates the virtual IP address with a cluster node on the network. This allows an IPaddr2 resource to function on that node. Prerequisites A configured HA cluster running on RHOSP An assigned IP address to use as the virtual IP address Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-virtual-ip resource agent, run the following command. Run the following command to determine the subnet ID for the virtual IP address you are using. In this example, the virtual IP address is 172.16.0.119. 
Create the RHOSP virtual IP address resource. The following command creates an RHOSP virtual IP address resource for an IP address of 172.16.0.119, specifying the subnet ID you determined in the step. Configure ordering and location constraints: Ensure that the openstack-info resource starts before the virtual IP address resource. Ensure that the Virtual IP address resource runs on the same node as the openstack-info resource. Create an IPaddr2 resource for the virtual IP address. Configure ordering and location constraints to ensure that the openstack-virtual-ip resource starts before the IPaddr2 resource and that the IPaddr2 resource runs on the same node as the openstack-virtual-ip resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resources are running. 7.3. Configuring a floating IP address in an HA cluster on Red Hat OpenStack Platform The following procedure creates a floating IP address resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP An IP address on the public network to use as the floating IP address, assigned by the RHOSP administrator Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-floating-ip resource agent, run the following command. Find the subnet ID for the address on the public network that you will use to create the floating IP address resource. The public network is usually the network with the default gateway. Run the following command to display the default gateway address. Run the following command to find the subnet ID for the public network. This command generates a table with ID and Subnet headings. Create the floating IP address resource, specifying the public IP address for the resource and the subnet ID for that address. When you configure the floating IP address resource, the resource agent configures a virtual IP address on the public network and associates it with a cluster node. Configure an ordering constraint to ensure that the openstack-info resource starts before the floating IP address resource. Configure a location constraint to ensure that the floating IP address resource runs on the same node as the openstack-info resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resources are running. 7.4. Configuring a block storage resource in an HA cluster on Red Hat OpenStack Platform The following procedure creates a block storage resource for an HA cluster on RHOSP. This procedure uses a clouds.yaml file for RHOSP authentication. Prerequisites A configured HA cluster running on RHOSP A block storage volume created by the RHOSP administrator Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP Procedure Complete the following steps from any node in the cluster. To view the options for the openstack-cinder-volume resource agent, run the following command. Determine the volume ID of the block storage volume you are configuring as a cluster resource. Run the following command to display a table of available volumes that includes the UUID and name of each volume. 
If you already know the volume name, you can run the following command, specifying the volume you are configuring. This displays a table with an ID field. Create the block storage resource, specifying the ID for the volume. Configure an ordering constraint to ensure that the openstack-info resource starts before the block storage resource. Configure a location constraint to ensure that the block storage resource runs on the same node as the openstack-info resource. Verification Verify the resource constraint configuration. Check the cluster status to verify that the resource is running.
|
[
"pcs resource describe openstack-info",
"pcs resource create openstack-info openstack-info cloud=\"ha-example\" clone",
"pcs status Full List of Resources: * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ]",
"pcs resource describe openstack-virtual-ip",
"openstack --os-cloud=ha-example subnet list +--------------------------------------+ ... +----------------+ | ID | ... | Subnet | +--------------------------------------+ ... +----------------+ | 723c5a77-156d-4c3b-b53c-ee73a4f75185 | ... | 172.16.0.0/24 | +--------------------------------------+ ... +----------------+",
"pcs resource create ClusterIP-osp ocf:heartbeat:openstack-virtual-ip cloud=ha-example ip=172.16.0.119 subnet_id=723c5a77-156d-4c3b-b53c-ee73a4f75185",
"pcs constraint order start openstack-info-clone then ClusterIP-osp Adding openstack-info-clone ClusterIP-osp (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint colocation add ClusterIP-osp with openstack-info-clone score=INFINITY",
"pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=172.16.0.119",
"pcs constraint order start ClusterIP-osp then ClusterIP Adding ClusterIP-osp ClusterIP (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint colocation add ClusterIP with ClusterIP-osp",
"pcs constraint config Location Constraints: Ordering Constraints: start ClusterIP-osp then start ClusterIP (kind:Mandatory) start openstack-info-clone then start ClusterIP-osp (kind:Mandatory) Colocation Constraints: ClusterIP with ClusterIP-osp (score:INFINITY) ClusterIP-osp with openstack-info-clone (score:INFINITY)",
"pcs status . . . Full List of Resources: * fenceopenstack (stonith:fence_openstack): Started node01 * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * ClusterIP-osp (ocf::heartbeat:openstack-virtual-ip): Started node03 * ClusterIP (ocf::heartbeat:IPaddr2): Started node03",
"pcs resource describe openstack-floating-ip",
"route -n | grep ^0.0.0.0 | awk '{print USD2}' 172.16.0.1",
"openstack --os-cloud=ha-example subnet list +-------------------------------------+---+---------------+ | ID | | Subnet +-------------------------------------+---+---------------+ | 723c5a77-156d-4c3b-b53c-ee73a4f75185 | | 172.16.0.0/24 | +--------------------------------------+------------------+",
"pcs resource create float-ip openstack-floating-ip cloud=\"ha-example\" ip_id=\"10.19.227.211\" subnet_id=\"723c5a77-156d-4c3b-b53c-ee73a4f75185\"",
"pcs constraint order start openstack-info-clone then float-ip Adding openstack-info-clone float-ip (kind: Mandatory) (Options: first-action=start then-action=start",
"pcs constraint colocation add float-ip with openstack-info-clone score=INFINITY",
"pcs constraint config Location Constraints: Ordering Constraints: start openstack-info-clone then start float-ip (kind:Mandatory) Colocation Constraints: float-ip with openstack-info-clone (score:INFINITY)",
"pcs status . . . Full List of Resources: * fenceopenstack (stonith:fence_openstack): Started node01 * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * float-ip (ocf::heartbeat:openstack-floating-ip): Started node02",
"pcs resource describe openstack-cinder-volume",
"openstack --os-cloud=ha-example volume list | ID | Name | | 23f67c9f-b530-4d44-8ce5-ad5d056ba926| testvolume-cinder-data-disk |",
"openstack --os-cloud=ha-example volume show testvolume-cinder-data-disk",
"pcs resource create cinder-vol openstack-cinder-volume volume_id=\"23f67c9f-b530-4d44-8ce5-ad5d056ba926\" cloud=\"ha-example\"",
"pcs constraint order start openstack-info-clone then cinder-vol Adding openstack-info-clone cinder-vol (kind: Mandatory) (Options: first-action=start then-action=start",
"pcs constraint colocation add cinder-vol with openstack-info-clone score=INFINITY",
"pcs constraint config Location Constraints: Ordering Constraints: start openstack-info-clone then start cinder-vol (kind:Mandatory) Colocation Constraints: cinder-vol with openstack-info-clone (score:INFINITY)",
"pcs status . . . Full List of Resources: * Clone Set: openstack-info-clone [openstack-info]: * Started: [ node01 node02 node03 ] * cinder-vol (ocf::heartbeat:openstack-cinder-volume): Started node03 * fenceopenstack (stonith:fence_openstack): Started node01"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/configuring-ha-cluster-resources-on-red-hat-openstack-platform_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
|
Chapter 3. Considerations for Red Hat Gluster Storage
|
Chapter 3. Considerations for Red Hat Gluster Storage 3.1. Firewall and Port Access Red Hat Gluster Storage requires access to a number of ports in order to work properly. Ensure that port access is available as indicated in Section 3.1.2, "Port Access Requirements" . 3.1.1. Configuring the Firewall Firewall configuration tools differ between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. For Red Hat Enterprise Linux 6, use the iptables command to open a port: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide For Red Hat Enterprise Linux 7, if default ports are not already in use by other services, it is usually simpler to add a service rather than open a port: However, if the default ports are already in use, you can open a specific port with the following command: For example: 3.1.2. Port Access Requirements Table 3.1. Open the following ports on all storage servers Connection source TCP Ports UDP Ports Recommended for Used for Any authorized network entity with a valid SSH key 22 - All configurations Remote backup using geo-replication Any authorized network entity; be cautious not to clash with other RPC services. 111 111 All configurations RPC port mapper and RPC bind Any authorized SMB/CIFS client 139 and 445 137 and 138 Sharing storage using SMB/CIFS SMB/CIFS protocol Any authorized NFS clients 2049 2049 Sharing storage using Gluster NFS or NFS-Ganesha Exports using NFS protocol All servers in the Samba-CTDB cluster 4379 - Sharing storage using SMB and Gluster NFS CTDB Any authorized network entity 24007 - All configurations Management processes using glusterd Any authorized network entity 55555 - All configurations Gluster events daemon If you are upgrading from a version of Red Hat Gluster Storage to the latest version 3.5.4, the port used for glusterevents daemon should be modified to be in the ephemral range. NFSv3 clients 662 662 Sharing storage using NFS-Ganesha and Gluster NFS statd NFSv3 clients 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS NLM protocol NFSv3 clients sending mount requests - 32769 Sharing storage using Gluster NFS Gluster NFS MOUNT protocol NFSv3 clients sending mount requests 20048 20048 Sharing storage using NFS-Ganesha NFS-Ganesha MOUNT protocol NFS clients 875 875 Sharing storage using NFS-Ganesha NFS-Ganesha RQUOTA protocol (fetching quota information) Servers in pacemaker/corosync cluster 2224 - Sharing storage using NFS-Ganesha pcsd Servers in pacemaker/corosync cluster 3121 - Sharing storage using NFS-Ganesha pacemaker_remote Servers in pacemaker/corosync cluster - 5404 and 5405 Sharing storage using NFS-Ganesha corosync Servers in pacemaker/corosync cluster 21064 - Sharing storage using NFS-Ganesha dlm Any authorized network entity 49152 - 49664 - All configurations Brick communication ports. The total number of ports required depends on the number of bricks on the node. One port is required for each brick on the machine. Gluster Clients 1023 or 49152 - Applicable when system ports are already being used in the machines. Communication between brick and client processes. Table 3.2. 
Open the following ports on NFS-Ganesha and Gluster NFS storage clients Connection source TCP Ports UDP Ports Recommended for Used for NFSv3 servers 662 662 Sharing storage using NFS-Ganesha and Gluster NFS statd NFSv3 servers 32803 32803 Sharing storage using NFS-Ganesha and Gluster NFS NLM protocol
|
[
"iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5667 -j ACCEPT service iptables save",
"firewall-cmd --zone= zone_name --add-service=glusterfs firewall-cmd --zone= zone_name --add-service=glusterfs --permanent",
"firewall-cmd --zone= zone_name --add-port= port / protocol firewall-cmd --zone= zone_name --add-port= port / protocol --permanent",
"firewall-cmd --zone=public --add-port=5667/tcp firewall-cmd --zone=public --add-port=5667/tcp --permanent"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-getting_started
|
Part III. The Red Hat Build of OptaPlanner solver
|
Part III. The Red Hat Build of OptaPlanner solver Solving a planning problem with OptaPlanner consists of the following steps: Model your planning problem as a class annotated with the @PlanningSolution annotation (for example, the NQueens class). Configure a Solver (for example a First Fit and Tabu Search solver for any NQueens instance). Load a problem data set from your data layer (for example a Four Queens instance). That is the planning problem. Solve it with Solver.solve(problem) , which returns the best solution found.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/assembly-planner-configuration
|
20.6. Starting, Resuming, and Restoring a Virtual Machine
|
20.6. Starting, Resuming, and Restoring a Virtual Machine 20.6.1. Starting a Guest Virtual Machine The virsh start domain ; [--console] [--paused] [--autodestroy] [--bypass-cache] [--force-boot] command starts an inactive virtual machine that was already defined but whose state is inactive since its last managed save state or a fresh boot. By default, if the domain was saved by the virsh managedsave command, the domain will be restored to its state. Otherwise, it will be freshly booted. The command can take the following arguments and the name of the virtual machine is required. --console - will attach the terminal running virsh to the domain's console device. This is runlevel 3. --paused - if this is supported by the driver, it will start the guest virtual machine in a paused state --autodestroy - the guest virtual machine is automatically destroyed when virsh disconnects --bypass-cache - used if the guest virtual machine is in the managedsave --force-boot - discards any managedsave options and causes a fresh boot to occur Example 20.3. How to start a virtual machine The following example starts the guest1 virtual machine that you already created and is currently in the inactive state. In addition, the command attaches the guest's console to the terminal running virsh: 20.6.2. Configuring a Virtual Machine to be Started Automatically at Boot The virsh autostart [--disable] domain command will automatically start the guest virtual machine when the host machine boots. Adding the --disable argument to this command disables autostart. The guest in this case will not start automatically when the host physical machine boots. Example 20.4. How to make a virtual machine start automatically when the host physical machine starts The following example sets the guest1 virtual machine which you already created to autostart when the host boots: # virsh autostart guest1 20.6.3. Rebooting a Guest Virtual Machine Reboot a guest virtual machine using the virsh reboot domain [--mode modename ] command. Remember that this action will only return once it has executed the reboot, so there may be a time lapse from that point until the guest virtual machine actually reboots. You can control the behavior of the rebooting guest virtual machine by modifying the on_reboot element in the guest virtual machine's XML configuration file. By default, the hypervisor attempts to select a suitable shutdown method automatically. To specify an alternative method, the --mode argument can specify a comma separated list which includes acpi and agent . The order in which drivers will try each mode is undefined, and not related to the order specified in virsh. For strict control over ordering, use a single mode at a time and repeat the command. Example 20.5. How to reboot a guest virtual machine The following example reboots a guest virtual machine named guest1 . In this example, the reboot uses the initctl method, but you can choose any mode that suits your needs. # virsh reboot guest1 --mode initctl 20.6.4. Restoring a Guest Virtual Machine The virsh restore <file> [--bypass-cache] [--xml /path/to/file ] [--running] [--paused] command restores a guest virtual machine previously saved with the virsh save command. See Section 20.7.1, "Saving a Guest Virtual Machine's Configuration" for information on the virsh save command. The restore action restarts the saved guest virtual machine, which may take some time. 
The guest virtual machine's name and UUID are preserved, but the ID will not necessarily match the ID that the virtual machine had when it was saved. The virsh restore command can take the following arguments: --bypass-cache - causes the restore to avoid the file system cache but note that using this flag may slow down the restore operation. --xml - this argument must be used with an XML file name. Although this argument is usually omitted, it can be used to supply an alternative XML file for use on a restored guest virtual machine with changes only in the host-specific portions of the domain XML. For example, it can be used to account for the file naming differences in underlying storage due to disk snapshots taken after the guest was saved. --running - overrides the state recorded in the save image to start the guest virtual machine as running. --paused - overrides the state recorded in the save image to start the guest virtual machine as paused. Example 20.6. How to restore a guest virtual machine The following example restores the guest virtual machine and its running configuration file guest1-config.xml : # virsh restore guest1-config.xml --running 20.6.5. Resuming a Guest Virtual Machine The virsh resume domain command restarts the CPUs of a domain that was suspended. This operation is immediate. The guest virtual machine resumes execution from the point it was suspended. Note that this action will not resume a guest virtual machine that has been undefined. This action will not resume transient virtual machines and will only work on persistent virtual machines. Example 20.7. How to restore a suspended guest virtual machine The following example restores the guest1 virtual machine: # virsh resume guest1
|
[
"virsh start guest1 --console Domain guest1 started Connected to domain guest1 Escape character is ^]"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Starting_suspending_resuming_saving_and_restoring_a_guest_virtual_machine-Starting_a_defined_domain
|
Chapter 5. RHEL for Real Time processes and threads
|
Chapter 5. RHEL for Real Time processes and threads The RHEL for Real Time key factors in operating systems are minimal interrupt latency and minimal thread switching latency. Although all programs use threads and processes, RHEL for Real Time handles them in a different way compared to the standard Red Hat Enterprise Linux. In real-time, using parallelism helps achieve greater efficiency in task execution and latency. Parallelism is when multiple tasks or several sub-tasks run at the same time using the multi-core infrastructure of CPU. 5.1. Processes A real-time process, in simplest terms, is a program in execution. The term process refers to an independent address space, potentially containing multiple threads. When the concept of more than one process running inside one address space was developed, Linux turned to a process structure that shares an address space with another process. This works well, as long as the process data structure is small. A UNIX(R)-style process construct contains: Address mappings for virtual memory. An execution context (PC, stack, registers). State and accounting information. In real-time, each process starts with a single thread, often called the parent thread. You can create additional threads from parent threads using the fork() system calls. fork() creates a new child process which is identical to the parent process except for the new process identifier. The child process runs independent of the creating process. The parent and child processes can be executed simultaneously. The difference between the fork() and exec() system calls is that, fork() starts a new process which is the copy of the parent process and exec() replaces the current process image with the new one. In real-time, the fork() system call, when successful, returns the process identifier of the child process and the parent process returns a non-zero value. On error, it returns an error number. 5.2. Threads In real-time, multiple threads can exist within a process. All threads of a process share its virtual address space and system resources. A thread is a schedulable entity that contains: A program counter (PC). A register context. A stack pointer. In real-time, following are potential mechanisms to create parallelism: Using the fork() and exec() function calls to create new processes. The fork() call creates an exact duplicate of a process from which it is called and has a unique process identifier. Using the Posix threads ( pthreads ) API to create new threads within an already running process. You must evaluate the component interaction level before forking real-time threads. Creating a new address space and running it as a new process is beneficial when the components are independent of one another or with less interaction. When components are required to share data or communicate frequently, running the threads within one address space is more efficient. In real-time, the fork() system call, when successful, returns a zero value. On error, it returns an error number. 5.3. Additional resources fork(2)`and `exec(2) man pages on your system
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_rhel-for-real-time-processes-and-threads_understanding-rhel-for-real-time-core-concepts
|
Storage
|
Storage OpenShift Dedicated 4 Configuring storage for OpenShift Dedicated clusters Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/storage/index
|
Chapter 3. Clair security scanner
|
Chapter 3. Clair security scanner 3.1. Clair vulnerability databases Clair uses the following vulnerability databases to report for issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 3.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 3.2. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. 3.3. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug. 3.4. Advanced Clair configuration Use the procedures in the following sections to configure advanced Clair settings. 3.4.1. Unmanaged Clair configuration Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database. 
An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster. 3.4.1.1. Running a custom Clair configuration with an unmanaged Clair database Use the following procedure to set your Clair database to unmanaged. Important You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false 3.4.1.2. Configuring a custom Clair database with an unmanaged Clair database Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Note The following procedure sets up Clair with SSL/TLS certifications. To view a similar procedure that does not set up Clair with SSL/TLS certifications, see "Configuring a custom Clair database with a managed Clair configuration". Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . 
Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.2. Running a custom Clair configuration with a managed Clair database In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios: When a user wants to disable specific updater resources. When a user is running Red Hat Quay in an disconnected environment. For more information about running Clair in a disconnected environment, see Clair in disconnected environments . Note If you are running Red Hat Quay in an disconnected environment, the airgap parameter of your clair-config.yaml must be set to true . If you are running Red Hat Quay in an disconnected environment, you should disable all updater components. 3.4.2.1. Setting a Clair database to managed Use the following procedure to set your Clair database to managed. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true 3.4.2.2. Configuring a custom Clair database with a managed Clair configuration Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . 
Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.3. Clair in disconnected environments Note Currently, deploying Clair in disconnected environments is not supported on IBM Power and IBM Z. Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transferring the data to an isolated host, and then important the updater data on the isolated host into Clair. Use this guide to deploy Clair in a disconnected environment. Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 3.4.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. 3.4.3.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 3.4.3.1.3. 
Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.1.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.3.2. 
Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. 3.4.3.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters parameter set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 3.4.3.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. 
For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.4. Mapping repositories to Common Product Enumeration information Note Currently, mapping repositories to Common Product Enumeration information is not supported on IBM Power and IBM Z. Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 3.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay on OpenShift Container Platform deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 3.4.4.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs .
|
[
"podman pull ubuntu:20.04",
"sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true",
"oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret",
"indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true",
"apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl",
"chmod u+x ./clairctl",
"oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"sudo podman cp clairv4:/usr/bin/clairctl ./clairctl",
"chmod u+x ./clairctl",
"mkdir /etc/clairv4/config/",
"--- indexer: airgap: true --- matcher: disable_updaters: true ---",
"sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.13.3",
"./clairctl --config ./config.yaml export-updaters updates.gz",
"oc get svc -n quay-enterprise",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h",
"oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432",
"indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json",
"./clairctl --config ./clair-config.yaml import-updaters updates.gz",
"indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/clair-vulnerability-scanner
|
Chapter 5. LVM Configuration Examples
|
Chapter 5. LVM Configuration Examples This chapter provides some basic LVM configuration examples. 5.1. Creating an LVM Logical Volume on Three Disks This example procedure creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . To use disks in a volume group, label them as LVM physical volumes with the pvcreate command. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . Create a volume group that consists of the LVM physical volumes you have created. The following command creates the volume group new_vol_group . You can use the vgs command to display the attributes of the new volume group. Create the logical volume from the volume group you have created. The following command creates the logical volume new_logical_volume from the volume group new_vol_group . This example creates a logical volume that uses 2 gigabytes of the volume group. Create a file system on the logical volume. The following command creates a GFS2 file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage.
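The commands listed for this example can also be run as one commented sequence. The sketch below uses the same example devices, names, and 2 GB size as the procedure above; substitute your own disks before running it, because pvcreate destroys any data on them.

# Label the three disks as LVM physical volumes (destroys existing data on them)
pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1

# Create the volume group from the three physical volumes
vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1

# Display the attributes of the new volume group
vgs

# Create a 2 GB logical volume from the volume group
lvcreate -L 2G -n new_logical_volume new_vol_group

# Create a GFS2 file system on the logical volume (single journal, no cluster locking)
mkfs.gfs2 -p lock_nolock -j 1 /dev/new_vol_group/new_logical_volume

# Mount the logical volume and report file system disk space usage
mount /dev/new_vol_group/new_logical_volume /mnt
df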
|
[
"pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created",
"vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group \"new_vol_group\" successfully created",
"vgs VG #PV #LV #SN Attr VSize VFree new_vol_group 3 0 0 wz--n- 51.45G 51.45G",
"lvcreate -L 2G -n new_logical_volume new_vol_group Logical volume \"new_logical_volume\" created",
"mkfs.gfs2 -p lock_nolock -j 1 /dev/new_vol_group/new_logical_volume This will destroy any data on /dev/new_vol_group/new_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/new_vol_group/new_logical_volume Blocksize: 4096 Filesystem Size: 491460 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done",
"mount /dev/new_vol_group/new_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/new_vol_group/new_logical_volume 1965840 20 1965820 1% /mnt"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/lvm_examples
|
Chapter 1. Operators overview
|
Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. With Operators, you can create applications to monitor the running services in the cluster. Operators are designed specifically for your applications. Operators implement and automate the common Day 1 operations such as installation and configuration as well as Day 2 operations such as autoscaling up and down and creating backups. All these activities are in a piece of software running inside your cluster. 1.1. For developers As a developer, you can perform the following Operator tasks: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Install and subscribe an Operator to your namespace (see the sketch after this overview) . Create an application from an installed Operator through the web console . 1.2. For administrators As a cluster administrator, you can perform the following Operator tasks: Manage custom catalogs Allow non-cluster administrators to install Operators Install an Operator from OperatorHub View Operator status . Manage Operator conditions Upgrade installed Operators Delete installed Operators Configure proxy support Use Operator Lifecycle Manager on restricted networks To know all about the cluster Operators that Red Hat provides, see Cluster Operators reference . 1.3. Next steps To understand more about Operators, see What are Operators?
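Installing and subscribing an Operator to a namespace is done through an OLM Subscription object. The manifest below is an illustrative sketch, not taken from this guide: the Operator name, channel, and namespace are hypothetical placeholders, and a namespace-scoped installation also needs an OperatorGroup in the target namespace; the apiVersion and kind shown are the standard OLM Subscription API.

oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator          # hypothetical Operator package name
  namespace: example-namespace    # hypothetical target namespace
spec:
  channel: stable                 # update channel published for the Operator
  name: example-operator          # package name as it appears in the catalog
  source: redhat-operators        # catalog source in openshift-marketplace
  sourceNamespace: openshift-marketplace
EOF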
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/operators/operators-overview
|
Release Notes for Red Hat build of Apache Camel K 1.10.5
|
Release Notes for Red Hat build of Apache Camel K 1.10.5 Red Hat build of Apache Camel K 1.10.5 What's new in Red Hat build of Apache Camel K Red Hat build of Apache Camel K Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/release_notes_for_red_hat_build_of_apache_camel_k_1.10.5/index
|
1.10.4. VIRTUAL SERVERS
|
1.10.4. VIRTUAL SERVERS The VIRTUAL SERVERS panel displays information for each currently defined virtual server. Each table entry shows the status of the virtual server, the server name, the virtual IP assigned to the server, the netmask of the virtual IP, the port number to which the service communicates, the protocol used, and the virtual device interface. Figure 1.34. The VIRTUAL SERVERS Panel Each server displayed in the VIRTUAL SERVERS panel can be configured on subsequent screens or subsections . To add a service, click the ADD button. To remove a service, select it by clicking the radio button next to the virtual server and click the DELETE button. To enable or disable a virtual server in the table, click its radio button and click the (DE)ACTIVATE button. After adding a virtual server, you can configure it by clicking the radio button to its left and clicking the EDIT button to display the VIRTUAL SERVER subsection. 1.10.4.1. The VIRTUAL SERVER Subsection The VIRTUAL SERVER subsection panel shown in Figure 1.35, "The VIRTUAL SERVERS Subsection" allows you to configure an individual virtual server. Links to subsections related specifically to this virtual server are located along the top of the page. But before configuring any of the subsections related to this virtual server, complete this page and click on the ACCEPT button. Figure 1.35. The VIRTUAL SERVERS Subsection Name A descriptive name to identify the virtual server. This name is not the hostname for the machine, so make it descriptive and easily identifiable. You can even reference the protocol used by the virtual server, such as HTTP. Application port The port number through which the service application will listen. Protocol Provides a choice of UDP or TCP, in a drop-down menu. Virtual IP Address The virtual server's floating IP address. Virtual IP Network Mask The netmask for this virtual server, in the drop-down menu. Firewall Mark For entering a firewall mark integer value when bundling multi-port protocols or creating a multi-port virtual server for separate, but related protocols. Device The name of the network device to which you want the floating IP address defined in the Virtual IP Address field to bind. You should alias the public floating IP address to the Ethernet interface connected to the public network. Re-entry Time An integer value that defines the number of seconds before the active LVS router attempts to use a real server after the real server failed. Service Timeout An integer value that defines the number of seconds before a real server is considered dead and not available. Quiesce server When the Quiesce server radio button is selected, anytime a new real server node comes online, the least-connections table is reset to zero so the active LVS router routes requests as if all the real servers were freshly added to the cluster. This option prevents a new server from becoming bogged down with a high number of connections upon entering the cluster. Load monitoring tool The LVS router can monitor the load on the various real servers by using either rup or ruptime . If you select rup from the drop-down menu, each real server must run the rstatd service. If you select ruptime , each real server must run the rwhod service. Scheduling The preferred scheduling algorithm from the drop-down menu. The default is Weighted least-connection . Persistence Used if you need persistent connections to the virtual server during client transactions.
In this text field, specify the number of seconds of inactivity allowed to lapse before a connection times out. Persistence Network Mask To limit persistence to a particular subnet, select the appropriate network mask from the drop-down menu.
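Behind the Piranha Configuration Tool, these fields become an LVS virtual server definition. Purely as a hypothetical illustration (the tool itself writes the configuration for you, and the address, port, firewall mark, and timeout below are invented), roughly equivalent ipvsadm commands would look like this:

# TCP virtual server on the floating IP, port 80, weighted least-connection
# scheduling, with a 300-second persistence timeout
ipvsadm -A -t 192.0.2.10:80 -s wlc -p 300

# When bundling multi-port protocols, the virtual server is keyed by a
# firewall mark instead of an address and port
ipvsadm -A -f 21 -s wlc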
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-piranha-virtservs-cso
|
Migrating Apache Camel
|
Migrating Apache Camel Red Hat build of Apache Camel 4.8 Migrating Apache Camel Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team https://access.redhat.com/support
|
[
"<circuitBreaker> <resilience4jConfiguration> <timeoutEnabled>true</timeoutEnabled> <timeoutDuration>2000</timeoutDuration> </resilience4jConfiguration> </circuitBreaker>",
"<circuitBreaker> <resilience4jConfiguration timeoutEnabled=\"true\" timeoutDuration=\"2000\"/> </circuitBreaker>",
"<route id=\"myRoute\" description=\"Something that this route do\"> <from uri=\"kafka:cheese\"/> </route>",
"camel.health.producers-enabled = true",
"- route: from: uri: \"direct:info\" steps: - log: \"message\"",
"\"bean:myBean?method=foo(com.foo.MyOrder.class, true)\"",
"\"bean:myBean?method=bar(String.class, int.class)\"",
"from(\"optaplanner:myProblemName\") .to(\"...\")",
"from(\"optaplanner:myProblemName?configFile=PATH/TO/CONFIG.FILE.xml\") .to(\"...\")",
"from(\"platform-http:myservice\") .to(\"...\")",
"<dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency>",
"telegram:bots/myTokenHere",
"telegram:bots?authorizationToken=myTokenHere",
"ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/migrating_apache_camel/index
|
Chapter 3. Installing the Migration Toolkit for Containers
|
Chapter 3. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 3.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 3.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. 
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi8 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.12 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.12 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 3.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.12, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 3.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. 
DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 3.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 3.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 3.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 3.4.2.1. 
NetworkPolicy configuration 3.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 3.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 3.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 3.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 3.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 3.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 3.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 3.4.4. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 3.4.4.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 3.4.4.3. 
Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 3.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 3.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 3.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 3.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . 
If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 3.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 3.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 3.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 3.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
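The replication repository section above says to obtain the Multicloud Object Gateway S3 endpoint and keys by running the describe command on the NooBaa custom resource, without spelling the command out. A minimal sketch follows; it assumes OpenShift Data Foundation is installed in the openshift-storage namespace, which is the usual default, and the noobaa-admin secret name is likewise an assumption about a typical deployment.

# Inspect the NooBaa custom resource; the S3 endpoint is reported in its status
oc describe noobaa -n openshift-storage

# The access key pair is typically stored base64-encoded in a secret
# in the same namespace (secret name assumed here)
oc get secret noobaa-admin -n openshift-storage -o yaml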
|
[
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migration_toolkit_for_containers/installing-mtc
|
Chapter 18. Managing cloud provider credentials
|
Chapter 18. Managing cloud provider credentials 18.1. About the Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). The CCO syncs on CredentialsRequest custom resources (CRs) to allow OpenShift Container Platform components to request cloud provider credentials with the specific permissions that are required for the cluster to run. By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in several different modes. If no mode is specified, or the credentialsMode parameter is set to an empty string ( "" ), the CCO operates in its default mode. 18.1.1. Modes By setting different values for the credentialsMode parameter in the install-config.yaml file, the CCO can be configured to operate in mint , passthrough , or manual mode. These options provide transparency and flexibility in how the CCO uses cloud credentials to process CredentialsRequest CRs in the cluster, and allow the CCO to be configured to suit the security requirements of your organization. Not all CCO modes are supported for all cloud providers. Mint : In mint mode, the CCO uses the provided admin-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. Note Mint mode is the default and recommended best practice setting for the CCO to use. Passthrough : In passthrough mode, the CCO passes the provided cloud credential to the components that request cloud credentials. Manual : In manual mode, a user manages cloud credentials instead of the CCO. Manual with AWS STS : In manual mode, you can configure an AWS cluster to use Amazon Web Services Secure Token Service (AWS STS). With this configuration, the CCO uses temporary credentials for different components. Important Support for Amazon Web Services Secure Token Service (AWS STS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Table 18.1. CCO mode support matrix Cloud provider Mint Passthrough Manual Amazon Web Services (AWS) X X X Microsoft Azure X X X Google Cloud Platform (GCP) X X X Red Hat OpenStack Platform (RHOSP) X Red Hat Virtualization (RHV) X VMware vSphere X 18.1.2. Default behavior For platforms on which multiple modes are supported (AWS, Azure, and GCP), when the CCO operates in its default mode, it checks the provided credentials dynamically to determine for which mode they are sufficient to process CredentialsRequest CRs. By default, the CCO determines whether the credentials are sufficient for mint mode, which is the preferred mode of operation, and uses those credentials to create appropriate credentials for components in the cluster. If the credentials are not sufficient for mint mode, it determines whether they are sufficient for passthrough mode. If the credentials are not sufficient for passthrough mode, the CCO cannot adequately process CredentialsRequest CRs. 
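Because the mode is selected through the credentialsMode parameter in install-config.yaml, it can help to see where that parameter sits. The fragment below is only a sketch: most installation fields are omitted, the names are placeholders, and the value should be Mint, Passthrough, or Manual (or left out entirely for the default mode described above).

apiVersion: v1
baseDomain: example.com        # placeholder
metadata:
  name: example-cluster        # placeholder
credentialsMode: Mint          # or Passthrough or Manual; omit for the default mode
platform:
  aws:
    region: us-east-1          # placeholder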
Note The CCO cannot verify whether Azure credentials are sufficient for passthrough mode. If Azure credentials are insufficient for mint mode, the CCO operates with the assumption that the credentials are sufficient for passthrough mode. If the provided credentials are determined to be insufficient during installation, the installation fails. For AWS, the installer fails early in the process and indicates which required permissions are missing. Other providers might not provide specific information about the cause of the error until errors are encountered. If the credentials are changed after a successful installation and the CCO determines that the new credentials are insufficient, the CCO puts conditions on any new CredentialsRequest CRs to indicate that it cannot process them because of the insufficient credentials. To resolve insufficient credentials issues, provide a credential with sufficient permissions. If an error occurred during installation, try installing again. For issues with new CredentialsRequest CRs, wait for the CCO to try to process the CR again. As an alternative, you can manually create IAM for AWS , Azure , and GCP . 18.1.3. Additional resources Cluster Operators reference page for the Cloud Credential Operator 18.2. Using mint mode Mint mode is supported for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. If the credential is not removed after installation, it is stored and used by the CCO to process CredentialsRequest CRs for components in the cluster and create new credentials for each with only the specific permissions that are required. The continuous reconciliation of cloud credentials in mint mode allows actions that require additional credentials or permissions, such as upgrading, to proceed. If the requirement that mint mode stores the administrator-level credential in the cluster kube-system namespace does not suit the security requirements of your organization, see Alternatives to storing administrator-level secrets in the kube-system project for AWS , Azure , or GCP . 18.2.1. Mint mode permissions requirements When using the CCO in mint mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials are not sufficient for mint mode, the CCO cannot create an IAM user. 18.2.1.1. Amazon Web Services (AWS) permissions The credential you provide for mint mode in AWS must have the following permissions: iam:CreateAccessKey iam:CreateUser iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUser iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser iam:SimulatePrincipalPolicy 18.2.1.2. Microsoft Azure permissions The credential you provide for mint mode in Azure must have a service principal with the permissions specified in Creating a service principal . 18.2.1.3. 
Google Cloud Platform (GCP) permissions The credential you provide for mint mode in GCP must have the following permissions: resourcemanager.projects.get serviceusage.services.list iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.roles.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy 18.2.2. Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> 18.2.3. Mint mode with removal or rotation of the administrator-level credential Currently, this mode is only supported on AWS and GCP. In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation. The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify if all CredentialsRequest objects have their required permissions, thus the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. The administrator-level credential is not stored in the cluster permanently. Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade. 18.2.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. 
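Before applying a replacement credential, one way to sanity-check that it still satisfies the mint-mode permission list above is the AWS IAM policy simulator. This is a hedged sketch rather than part of the documented procedure; the user ARN is a placeholder for the IAM user that backs the new credential, and only a subset of the required actions is shown.

# Simulate a few of the mint-mode actions against the candidate IAM user.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/cco-admin \
  --action-names iam:CreateUser iam:CreateAccessKey iam:PutUserPolicy iam:TagUser \
  --query 'EvaluationResults[].{action:EvalActionName,decision:EvalDecision}'

Any action that evaluates to implicitDeny or explicitDeny indicates that the credential is not sufficient for mint mode.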
Note When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OpenShift Container Platform secrets accordingly. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } ... Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields are different than the previously recorded information. 18.2.3.2. Removing cloud provider credentials After installing an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The administrator-level credential is required only during changes that require its elevated permissions, such as upgrades. Note Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . 18.2.4. 
Additional resources Alternatives to storing administrator-level secrets in the kube-system project for AWS Alternatives to storing administrator-level secrets in the kube-system project for Azure Alternatives to storing administrator-level secrets in the kube-system project for GCP Creating a service principal in Azure 18.3. Using passthrough mode Passthrough mode is supported for Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere. In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. The credential must have permissions to perform the installation and complete the operations that are required by components in the cluster, but does not need to be able to create new credentials. The CCO does not attempt to create additional limited-scoped credentials in passthrough mode. 18.3.1. Passthrough mode permissions requirements When using the CCO in passthrough mode, ensure that the credential you provide meets the requirements of the cloud on which you are running or installing OpenShift Container Platform. If the provided credentials the CCO passes to a component that creates a CredentialsRequest CR are not sufficient, that component will report an error when it tries to call an API that it does not have permissions for. 18.3.1.1. Amazon Web Services (AWS) permissions The credential you provide for passthrough mode in AWS must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for AWS . 18.3.1.2. Microsoft Azure permissions The credential you provide for passthrough mode in Azure must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for Azure . 18.3.1.3. Google Cloud Platform (GCP) permissions The credential you provide for passthrough mode in GCP must have all the requested permissions for all CredentialsRequest CRs that are required by the version of OpenShift Container Platform you are running or installing. To locate the CredentialsRequest CRs that are required, see Manually creating IAM for GCP . 18.3.1.4. Red Hat OpenStack Platform (RHOSP) permissions To install an OpenShift Container Platform cluster on RHOSP, the CCO requires a credential with the permissions of a member user role. 18.3.1.5. Red Hat Virtualization (RHV) permissions To install an OpenShift Container Platform cluster on RHV, the CCO requires a credential with the following privileges: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the specific cluster that is targeted for OpenShift Container Platform deployment 18.3.1.6. VMware vSphere permissions To install an OpenShift Container Platform cluster on VMware vSphere, the CCO requires a credential with the following vSphere privileges: Table 18.2. Required vSphere privileges Category Privileges Datastore Allocate space Folder Create folder , Delete folder vSphere Tagging All privileges Network Assign network Resource Assign virtual machine to resource pool Profile-driven storage All privileges vApp All privileges Virtual machine All privileges 18.3.2. 
Admin credentials root secret format Each cloud provider uses a credentials root secret in the kube-system namespace by convention, which is then used to satisfy all credentials requests and create their respective secrets. This is done either by minting new credentials with mint mode , or by copying the credentials root secret with passthrough mode . The format for the secret varies by cloud, and is also used for each CredentialsRequest secret. Amazon Web Services (AWS) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key> Microsoft Azure secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region> On Microsoft Azure, the credentials secret format includes two properties that must contain the cluster's infrastructure ID, generated randomly for each cluster installation. This value can be found after running create manifests: USD cat .openshift_install_state.json | jq '."*installconfig.ClusterID".InfraID' -r Example output mycluster-2mpcn This value would be used in the secret data as follows: azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg Google Cloud Platform (GCP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account> Red Hat OpenStack Platform (RHOSP) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init> Red Hat Virtualization (RHV) secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle> VMware vSphere secret format apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password> 18.3.3. Passthrough mode credential maintenance If CredentialsRequest CRs change over time as the cluster is upgraded, you must manually update the passthrough mode credential to meet the requirements. To avoid credentials issues during an upgrade, check the CredentialsRequest CRs in the release image for the new version of OpenShift Container Platform before upgrading. To locate the CredentialsRequest CRs that are required for your cloud provider, see Manually creating IAM for AWS , Azure , or GCP . 18.3.3.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. 
After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Note When rotating the credentials for an Azure cluster that is using mint mode, do not delete or replace the service principal that was used during installation. Instead, generate new Azure service principal client secrets and update the OpenShift Container Platform secrets accordingly. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials RHV ovirt-credentials vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec Azure: AzureProviderSpec GCP: GCPProviderSpec RHOSP: OpenStackProviderSpec RHV: OvirtProviderSpec vSphere: VSphereProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } ... Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields are different than the previously recorded information. 18.3.4. Reducing permissions after installation When using passthrough mode, each component has the same permissions used by all other components. 
If you do not reduce the permissions after installing, all components have the broad permissions that are required to run the installer. After installation, you can reduce the permissions on your credential to only those that are required to run the cluster, as defined by the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are using. To locate the CredentialsRequest CRs that are required for AWS, Azure, or GCP and learn how to change the permissions the CCO uses, see Manually creating IAM for AWS , Azure , or GCP . 18.3.5. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP 18.4. Using manual mode Manual mode is supported for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). To use this mode, you must examine the CredentialsRequest CRs in the release image for the version of OpenShift Container Platform that you are running or installing, create corresponding credentials in the underlying cloud provider, and create Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequest CRs for the cluster's cloud provider. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. This mode also does not require connectivity to the AWS public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. For information about configuring your cloud provider to use manual mode, see Manually creating IAM for AWS , Azure , or GCP . 18.4.1. Upgrading clusters with manually maintained credentials If credentials are added in a future release, the Cloud Credential Operator (CCO) upgradable status for a cluster with manually maintained credentials changes to false . For minor release, for example, from 4.6 to 4.7, this status prevents you from upgrading until you have addressed any updated permissions. For z-stream releases, for example, from 4.6.10 to 4.6.11, the upgrade is not blocked, but the credentials must still be updated for the new release. Use the Administrator perspective of the web console to determine if the CCO is upgradeable. Navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , examine the CredentialsRequest custom resource for the new release and update the manually maintained credentials on your cluster to match before upgrading. In addition to creating new credentials for the release image that you are upgrading to, you must review the required permissions for existing credentials and accommodate any new permissions requirements for existing components in the new release. The CCO cannot detect these mismatches and will not set upgradable to false in this case. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. 18.4.2. Manual mode with AWS STS You can configure an AWS cluster in manual mode to use Amazon Web Services Secure Token Service (AWS STS) . With this configuration, the CCO uses temporary credentials for different components. Important Support for Amazon Web Services Secure Token Service (AWS STS) is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 18.4.3. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP Using manual mode with AWS STS 18.5. Using manual mode with STS Manual mode with STS is available as a Technology Preview for Amazon Web Services (AWS). Important Support for AWS Secure Token Service (STS) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Note This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature. In manual mode with STS, the individual OpenShift Container Platform cluster components use AWS Secure Token Service (STS) to assign components IAM roles that provide short-term, limited-privilege security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. Requests for new and refreshed credentials are automated by using an appropriately configured AWS IAM OpenID Connect (OIDC) identity provider, combined with AWS IAM roles. OpenShift Container Platform signs service account tokens that are trusted by AWS IAM, and can be projected into a pod and used for authentication. Tokens are refreshed after one hour. Figure 18.1. STS authentication flow Using manual mode with STS changes the content of the AWS credentials that are provided to individual OpenShift Container Platform components. AWS secret format using long-lived credentials apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key> 1 The namespace for the component. 2 The name of the component secret. AWS secret format with STS apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4 1 The namespace for the component. 2 The name of the component secret. 3 The IAM role for the component. 4 The path to the service account token inside the pod. By convention, this is /var/run/secrets/openshift/serviceaccount/token for OpenShift Container Platform components. 18.5.1. 
Installing an OpenShift Container Platform cluster configured for manual mode with STS To install a cluster that is configured to use the CCO in manual mode with STS in OpenShift Container Platform version 4.7: Create the required AWS resources Run the OpenShift Container Platform installer Verify that the cluster is using short-lived credentials 18.5.1.1. Creating AWS resources manually To install an OpenShift Container Platform cluster that is configured to use the CCO in manual mode with STS, you must first manually create the required AWS resources. Procedure Generate a private key to sign the ServiceAccount object: USD openssl genrsa -out sa-signer 4096 Generate a ServiceAccount object public key: USD openssl rsa -in sa-signer -pubout -out sa-signer.pub Create an S3 bucket to hold the OIDC configuration: USD aws s3api create-bucket --bucket <oidc_bucket_name> --region <aws_region> --create-bucket-configuration LocationConstraint=<aws_region> Note If the value of <aws_region> is us-east-1 , do not specify the LocationConstraint parameter. Retain the S3 bucket URL: OPENID_BUCKET_URL="https://<oidc_bucket_name>.s3.<aws_region>.amazonaws.com" Build an OIDC configuration: Create a file named keys.json that contains the following information: { "keys": [ { "use": "sig", "kty": "RSA", "kid": "<public_signing_key_id>", "alg": "RS256", "n": "<public_signing_key_modulus>", "e": "<public_signing_key_exponent>" } ] } Where: <public_signing_key_id> is generated from the public key with: USD openssl rsa -in sa-signer.pub -pubin --outform DER | openssl dgst -binary -sha256 | openssl base64 | tr '/+' '_-' | tr -d '=' This command converts the public key to DER format, performs a SHA-256 checksum on the binary representation, encodes the data with base64 encoding, and then changes the base64-encoded output to base64URL encoding. <public_signing_key_modulus> is generated from the public key with: USD openssl rsa -pubin -in sa-signer.pub -modulus -noout | sed -e 's/Modulus=//' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '=' This command prints the modulus of the public key, extracts the hex representation of the modulus, converts the ASCII hex to binary, encodes the data with base64 encoding, and then changes the base64-encoded output to base64URL encoding. <public_signing_key_exponent> is generated from the public key with: USD printf "%016x" USD(openssl rsa -pubin -in sa-signer.pub -noout -text | grep Exponent | awk '{ print USD2 }') | awk '{ sub(/(00)+/, "", USD1); print USD1 }' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '=' This command extracts the decimal representation of the public key exponent, prints it as hex with a padded 0 if needed, removes leading 00 pairs, converts the ASCII hex to binary, encodes the data with base64 encoding, and then changes the base64-encoded output to use only characters that can be used in a URL. 
Create a file named openid-configuration that contains the following information: { "issuer": "USDOPENID_BUCKET_URL", "jwks_uri": "USD{OPENID_BUCKET_URL}/keys.json", "response_types_supported": [ "id_token" ], "subject_types_supported": [ "public" ], "id_token_signing_alg_values_supported": [ "RS256" ], "claims_supported": [ "aud", "exp", "sub", "iat", "iss", "sub" ] } Upload the OIDC configuration: USD aws s3api put-object --bucket <oidc_bucket_name> --key keys.json --body ./keys.json USD aws s3api put-object --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --body ./openid-configuration Where <oidc_bucket_name> is the S3 bucket that was created to hold the OIDC configuration. Allow the AWS IAM OpenID Connect (OIDC) identity provider to read these files: USD aws s3api put-object-acl --bucket <oidc_bucket_name> --key keys.json --acl public-read USD aws s3api put-object-acl --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --acl public-read Create an AWS IAM OIDC identity provider: Get the certificate chain from the server that hosts the OIDC configuration: USD echo | openssl s_client -servername USD<oidc_bucket_name>.s3.USD<aws_region>.amazonaws.com -connect USD<oidc_bucket_name>.s3.USD<aws_region>.amazonaws.com:443 -showcerts 2>/dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".pem"; print >out}' Calculate the fingerprint for the certificate at the root of the chain: USD export BUCKET_FINGERPRINT=USD(openssl x509 -in cert<number>.pem -fingerprint -noout | sed -e 's/.*Fingerprint=//' -e 's/://g') Where <number> is the highest number in the files that were saved. For example, if 2 is the highest number in the files that were saved, use cert2.pem . Create the identity provider: USD aws iam create-open-id-connect-provider --url USDOPENID_BUCKET_URL --thumbprint-list USDBUCKET_FINGERPRINT --client-id-list openshift sts.amazonaws.com Retain the returned ARN of the newly created identity provider. This ARN is later referred to as <aws_iam_openid_arn> . Generate IAM roles: Locate all CredentialsRequest CRs in this release image that target the cloud you are deploying on: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.<y>.<z>-x86_64 --credentials-requests --cloud=aws Where <y> and <z> are the numbers corresponding to the version of OpenShift Container Platform you are installing. For each CredentialsRequest CR, create an IAM role of type Web identity using the previously created IAM Identity Provider that grants the necessary permissions and establishes a trust relationship that trusts the identity provider previously created. 
For example, for the openshift-machine-api-operator CredentialsRequest CR in 0000_30_machine-api-operator_00_credentials-request.yaml , create an IAM role that allows an identity from the created OIDC provider created for the cluster, similar to the following: { "Role": { "Path": "/", "RoleName": "openshift-machine-api-aws-cloud-credentials", "RoleId": "ARSOMEROLEID", "Arn": "arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials", "CreateDate": "2021-01-06T15:54:13Z", "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "<aws_iam_openid_arn>" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com/USDBUCKET_NAME:aud": "openshift" } } } ] }, "Description": "OpenShift role for openshift-machine-api/aws-cloud-credentials", "MaxSessionDuration": 3600, "RoleLastUsed": { "LastUsedDate": "2021-02-03T02:51:24Z", "Region": "<aws_region>" } } } Where <aws_iam_openid_arn> is the returned ARN of the newly created identity provider. To further restrict the role such that only specific cluster ServiceAccount objects can assume the role, modify the trust relationship of each role by updating the .Role.AssumeRolePolicyDocument.Statement[].Condition field to the specific ServiceAccount objects for each component. Modify the trust relationship of the cluster-image-registry-operator role to have the following condition: "Condition": { "StringEquals": { "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [ "system:serviceaccount:openshift-image-registry:registry", "system:serviceaccount:openshift-image-registry:cluster-image-registry-operator" ] } } Modify the trust relationship of the openshift-ingress-operator to have the following condition: "Condition": { "StringEquals": { "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [ "system:serviceaccount:openshift-ingress-operator:ingress-operator" ] } } Modify the trust relationship of the openshift-cluster-csi-drivers to have the following condition: "Condition": { "StringEquals": { "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [ "system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator", "system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-controller-sa" ] } } Modify the trust relationship of the openshift-machine-api to have the following condition: "Condition": { "StringEquals": { "<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub": [ "system:serviceaccount:openshift-machine-api:machine-api-controllers" ] } } For each IAM role, attach an IAM policy to the role that reflects the required permissions from the corresponding CredentialsRequest objects. 
For example, for openshift-machine-api , attach an IAM policy similar to the following: { "RoleName": "openshift-machine-api-aws-cloud-credentials", "PolicyName": "openshift-machine-api-aws-cloud-credentials", "PolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:CreateTags", "ec2:DescribeAvailabilityZones", "ec2:DescribeDhcpOptions", "ec2:DescribeImages", "ec2:DescribeInstances", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeVpcs", "ec2:RunInstances", "ec2:TerminateInstances", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:RegisterTargets", "iam:PassRole", "iam:CreateServiceLinkedRole" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey", "kms:GenerateDataKeyWithoutPlainText", "kms:DescribeKey" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "kms:RevokeGrant", "kms:CreateGrant", "kms:ListGrants" ], "Resource": "*", "Condition": { "Bool": { "kms:GrantIsForAWSResource": true } } } ] } } Prepare to run the OpenShift Container Platform installer: Create the install-config.yaml file: USD ./openshift-install create install-config Configure the cluster to install with the CCO in manual mode: USD echo "credentialsMode: Manual" >> install-config.yaml Create install manifests: USD ./openshift-install create manifests Create a tls directory, and copy the private key generated previously there: Note The target file name must be ./tls/bound-service-account-signing-key.key . USD mkdir tls ; cp <path_to_service_account_signer> ./tls/bound-service-account-signing-key.key Create a custom Authentication CR with the file name cluster-authentication-02-config.yaml : USD cat << EOF > manifests/cluster-authentication-02-config.yaml apiVersion: config.openshift.io/v1 kind: Authentication metadata: name: cluster spec: serviceAccountIssuer: USDOPENID_BUCKET_URL EOF For each CredentialsRequest CR that is extracted from the release image, create a secret with the target namespace and target name that is indicated in each CredentialsRequest , substituting the AWS IAM role ARN created previously for each component: Example secret manifest for openshift-machine-api : USD cat manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml apiVersion: v1 stringData: credentials: |- [default] role_arn = arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token kind: Secret metadata: name: aws-cloud-credentials namespace: openshift-machine-api type: Opaque 18.5.1.2. Running the installer Run the OpenShift Container Platform installer: USD ./openshift-install create cluster 18.5.1.3. Verifying the installation Connect to the OpenShift Container Platform cluster. 
Verify that the cluster does not have root credentials: $ oc get secrets -n kube-system aws-creds The output should look similar to: Error from server (NotFound): secrets "aws-creds" not found Verify that the components are assuming the IAM roles that are specified in the secret manifests, instead of using credentials that are created by the CCO: Example command with the Image Registry Operator $ oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode The output should show the role and web identity token that are used by the component and look similar to: Example output with the Image Registry Operator [default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
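For reference, the CredentialsRequest objects that the CCO processes throughout this chapter follow the shape sketched below. The component name, namespaces, and permissions are illustrative rather than taken from a specific release image; in mint mode the CCO mints a cloud identity scoped to the statementEntries and writes the secret named in secretRef, while in manual mode you create that secret yourself.

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: example-component                       # illustrative name
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: example-component-cloud-credentials   # secret consumed by the component
    namespace: example-component-namespace      # illustrative target namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - ec2:DescribeInstances                   # illustrative permission
      resource: "*"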
|
[
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: aws-creds stringData: aws_access_key_id: <base64-encoded_access_key_id> aws_secret_access_key: <base64-encoded_secret_access_key>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: azure-credentials stringData: azure_subscription_id: <base64-encoded_subscription_id> azure_client_id: <base64-encoded_client_id> azure_client_secret: <base64-encoded_client_secret> azure_tenant_id: <base64-encoded_tenant_id> azure_resource_prefix: <base64-encoded_resource_prefix> azure_resourcegroup: <base64-encoded_resource_group> azure_region: <base64-encoded_region>",
"cat .openshift_install_state.json | jq '.\"*installconfig.ClusterID\".InfraID' -r",
"mycluster-2mpcn",
"azure_resource_prefix: mycluster-2mpcn azure_resourcegroup: mycluster-2mpcn-rg",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: gcp-credentials stringData: service_account.json: <base64-encoded_service_account>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: openstack-credentials data: clouds.yaml: <base64-encoded_cloud_creds> clouds.conf: <base64-encoded_cloud_creds_init>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: ovirt-credentials data: ovirt_url: <base64-encoded_url> ovirt_username: <base64-encoded_username> ovirt_password: <base64-encoded_password> ovirt_insecure: <base64-encoded_insecure> ovirt_ca_bundle: <base64-encoded_ca_bundle>",
"apiVersion: v1 kind: Secret metadata: namespace: kube-system name: vsphere-creds data: vsphere.openshift.example.com.username: <base64-encoded_username> vsphere.openshift.example.com.password: <base64-encoded_password>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 data: aws_access_key_id: <base64-encoded-access-key-id> aws_secret_access_key: <base64-encoded-secret-access-key>",
"apiVersion: v1 kind: Secret metadata: namespace: <target-namespace> 1 name: <target-secret-name> 2 stringData: credentials: |- [default] role_name: <operator-role-name> 3 web_identity_token_file: <path-to-token> 4",
"openssl genrsa -out sa-signer 4096",
"openssl rsa -in sa-signer -pubout -out sa-signer.pub",
"aws s3api create-bucket --bucket <oidc_bucket_name> --region <aws_region> --create-bucket-configuration LocationConstraint=<aws_region>",
"OPENID_BUCKET_URL=\"https://<oidc_bucket_name>.s3.<aws_region>.amazonaws.com\"",
"{ \"keys\": [ { \"use\": \"sig\", \"kty\": \"RSA\", \"kid\": \"<public_signing_key_id>\", \"alg\": \"RS256\", \"n\": \"<public_signing_key_modulus>\", \"e\": \"<public_signing_key_exponent>\" } ] }",
"openssl rsa -in sa-signer.pub -pubin --outform DER | openssl dgst -binary -sha256 | openssl base64 | tr '/+' '_-' | tr -d '='",
"openssl rsa -pubin -in sa-signer.pub -modulus -noout | sed -e 's/Modulus=//' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '='",
"printf \"%016x\" USD(openssl rsa -pubin -in sa-signer.pub -noout -text | grep Exponent | awk '{ print USD2 }') | awk '{ sub(/(00)+/, \"\", USD1); print USD1 }' | xxd -r -p | base64 -w0 | tr '/+' '_-' | tr -d '='",
"{ \"issuer\": \"USDOPENID_BUCKET_URL\", \"jwks_uri\": \"USD{OPENID_BUCKET_URL}/keys.json\", \"response_types_supported\": [ \"id_token\" ], \"subject_types_supported\": [ \"public\" ], \"id_token_signing_alg_values_supported\": [ \"RS256\" ], \"claims_supported\": [ \"aud\", \"exp\", \"sub\", \"iat\", \"iss\", \"sub\" ] }",
"aws s3api put-object --bucket <oidc_bucket_name> --key keys.json --body ./keys.json",
"aws s3api put-object --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --body ./openid-configuration",
"aws s3api put-object-acl --bucket <oidc_bucket_name> --key keys.json --acl public-read",
"aws s3api put-object-acl --bucket <oidc_bucket_name> --key '.well-known/openid-configuration' --acl public-read",
"echo | openssl s_client -servername USD<oidc_bucket_name>.s3.USD<aws_region>.amazonaws.com -connect USD<oidc_bucket_name>.s3.USD<aws_region>.amazonaws.com:443 -showcerts 2>/dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out=\"cert\"a\".pem\"; print >out}'",
"export BUCKET_FINGERPRINT=USD(openssl x509 -in cert<number>.pem -fingerprint -noout | sed -e 's/.*Fingerprint=//' -e 's/://g')",
"aws iam create-open-id-connect-provider --url USDOPENID_BUCKET_URL --thumbprint-list USDBUCKET_FINGERPRINT --client-id-list openshift sts.amazonaws.com",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.<y>.<z>-x86_64 --credentials-requests --cloud=aws",
"{ \"Role\": { \"Path\": \"/\", \"RoleName\": \"openshift-machine-api-aws-cloud-credentials\", \"RoleId\": \"ARSOMEROLEID\", \"Arn\": \"arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials\", \"CreateDate\": \"2021-01-06T15:54:13Z\", \"AssumeRolePolicyDocument\": { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<aws_iam_openid_arn>\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<oidc_bucket_name>.s3.<aws_region>.amazonaws.com/USDBUCKET_NAME:aud\": \"openshift\" } } } ] }, \"Description\": \"OpenShift role for openshift-machine-api/aws-cloud-credentials\", \"MaxSessionDuration\": 3600, \"RoleLastUsed\": { \"LastUsedDate\": \"2021-02-03T02:51:24Z\", \"Region\": \"<aws_region>\" } } }",
"\"Condition\": { \"StringEquals\": { \"<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-image-registry:registry\", \"system:serviceaccount:openshift-image-registry:cluster-image-registry-operator\" ] } }",
"\"Condition\": { \"StringEquals\": { \"<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-ingress-operator:ingress-operator\" ] } }",
"\"Condition\": { \"StringEquals\": { \"<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator\", \"system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-controller-sa\" ] } }",
"\"Condition\": { \"StringEquals\": { \"<oidc_bucket_name>.s3.<aws_region>.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-machine-api:machine-api-controllers\" ] } }",
"{ \"RoleName\": \"openshift-machine-api-aws-cloud-credentials\", \"PolicyName\": \"openshift-machine-api-aws-cloud-credentials\", \"PolicyDocument\": { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:CreateTags\", \"ec2:DescribeAvailabilityZones\", \"ec2:DescribeDhcpOptions\", \"ec2:DescribeImages\", \"ec2:DescribeInstances\", \"ec2:DescribeSecurityGroups\", \"ec2:DescribeSubnets\", \"ec2:DescribeVpcs\", \"ec2:RunInstances\", \"ec2:TerminateInstances\", \"elasticloadbalancing:DescribeLoadBalancers\", \"elasticloadbalancing:DescribeTargetGroups\", \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\", \"elasticloadbalancing:RegisterTargets\", \"iam:PassRole\", \"iam:CreateServiceLinkedRole\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:Decrypt\", \"kms:Encrypt\", \"kms:GenerateDataKey\", \"kms:GenerateDataKeyWithoutPlainText\", \"kms:DescribeKey\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"kms:RevokeGrant\", \"kms:CreateGrant\", \"kms:ListGrants\" ], \"Resource\": \"*\", \"Condition\": { \"Bool\": { \"kms:GrantIsForAWSResource\": true } } } ] } }",
"./openshift-install create install-config",
"echo \"credentialsMode: Manual\" >> install-config.yaml",
"./openshift-install create manifests",
"mkdir tls ; cp <path_to_service_account_signer> ./tls/bound-service-account-signing-key.key",
"cat << EOF > manifests/cluster-authentication-02-config.yaml apiVersion: config.openshift.io/v1 kind: Authentication metadata: name: cluster spec: serviceAccountIssuer: USDOPENID_BUCKET_URL EOF",
"cat manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml apiVersion: v1 stringData: credentials: |- [default] role_arn = arn:aws:iam::123456789012:role/openshift-machine-api-aws-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token kind: Secret metadata: name: aws-cloud-credentials namespace: openshift-machine-api type: Opaque",
"./openshift-install create cluster",
"oc get secrets -n kube-system aws-creds",
"Error from server (NotFound): secrets \"aws-creds\" not found",
"oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode",
"[default] role_arn = arn:aws:iam::123456789:role/openshift-image-registry-installer-cloud-credentials web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/authentication_and_authorization/managing-cloud-provider-credentials
|
Chapter 22. Removing storage devices
|
Chapter 22. Removing storage devices You can safely remove a storage device from a running system, which helps prevent system memory overload and data loss. Do not remove a storage device on a system where: Free memory is less than 5% of the total memory in more than 10 samples per 100. Swapping is active (non-zero si and so columns in the vmstat command output). Prerequisites Before you remove a storage device, ensure that you have enough free system memory due to the increased system memory load during an I/O flush. Use the following commands to view the current memory load and free memory of the system: 22.1. Safe removal of storage devices Safely removing a storage device from a running system requires a top-to-bottom approach. Start from the top layer, which typically is an application or a file system, and work towards the bottom layer, which is the physical device. You can use storage devices in multiple ways, and they can have different virtual configurations on top of physical devices. For example, you can group multiple instances of a device into a multipath device, make it part of a RAID, or you can make it part of an LVM group. Additionally, devices can be accessed via a file system, or they can be accessed directly such as a "raw" device. While using the top-to-bottom approach, you must ensure that: the device that you want to remove is not in use all pending I/O to the device is flushed the operating system is not referencing the storage device 22.2. Removing block devices and associated metadata To safely remove a block device from a running system, to help prevent system memory overload and data loss you need to first remove metadata from them. Address each layer in the stack, starting with the file system, and proceed to the disk. These actions prevent putting your system into an inconsistent state. Use specific commands that may vary depending on what type of devices you are removing: lvremove , vgremove and pvremove are specific to LVM. For software RAID, run mdadm to remove the array. For more information, see Managing RAID . For block devices encrypted using LUKS, there are specific additional steps. The following procedure will not work for the block devices encrypted using LUKS. For more information, see Encrypting block devices using LUKS . Warning Rescanning the SCSI bus or performing any other action that changes the state of the operating system, without following the procedure documented here can cause delays due to I/O timeouts, devices to be removed unexpectedly, or data loss. Prerequisites You have an existing block device stack containing the file system, the logical volume, and the volume group. You ensured that no other applications or services are using the device that you want to remove. You backed up the data from the device that you want to remove. Optional: If you want to remove a multipath device, and you are unable to access its path devices, disable queueing of the multipath device by running the following command: This enables the I/O of the device to fail, allowing the applications that are using the device to shut down. Note Removing devices with their metadata one layer at a time ensures no stale signatures remain on the disk. Procedure Unmount the file system: Remove the file system: If you have added an entry into the /etc/fstab file to make a persistent association between the file system and a mount point, edit /etc/fstab at this point to remove that entry. 
Continue with the following steps, depending on the type of the device you want to remove: Remove the logical volume (LV) that contained the file system: If there are no other logical volumes remaining in the volume group (VG), you can safely remove the VG that contained the device: Remove the physical volume (PV) metadata from the PV device(s): Remove the partitions that contained the PVs: Remove the partition table if you want to fully wipe the device: Execute the following steps only if you want to physically remove the device: If you are removing a multipath device, execute the following commands: View all the paths to the device: The output of this command is required in a later step. Flush the I/O and remove the multipath device: If the device is not configured as a multipath device, or if the device is configured as a multipath device and you have previously passed I/O to the individual paths, flush any outstanding I/O to all device paths that are used: This is important for devices accessed directly where the umount or vgreduce commands do not flush the I/O. If you are removing a SCSI device, execute the following commands: Remove any reference to the path-based name of the device, such as /dev/sd , /dev/disk/by-path , or the major:minor number, in applications, scripts, or utilities on the system. This ensures that different devices added in the future are not mistaken for the current device. Remove each path to the device from the SCSI subsystem: Here the device-name is retrieved from the output of the multipath -l command, if the device was previously used as a multipath device. Remove the physical device from a running system. Note that the I/O to other devices does not stop when you remove this device. Verification Verify that the devices you intended to remove are not displaying on the output of lsblk command. The following is an example output: Additional resources multipath(8) , pvremove(8) , vgremove(8) , lvremove(8) , wipefs(8) , parted(8) , blockdev(8) and umount(8) man pages on your system
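To show how the individual commands fit together, the following is a hedged walkthrough that assembles them into the top-to-bottom order described above, assuming the example stack used in the accompanying commands: an ext4 file system on logical volume vg0/myvol, volume group vg0 on partition /dev/sdc1 of disk /dev/sdc, mounted at /mnt/mount-point. The multipath and per-path flush variants are omitted for brevity.

umount /mnt/mount-point                  # stop file system use (top layer)
wipefs -a /dev/vg0/myvol                 # remove the file system signature
lvremove vg0/myvol                       # remove the logical volume
vgremove vg0                             # remove the volume group, only if no other LVs remain
pvremove /dev/sdc1                       # remove the physical volume metadata
wipefs -a /dev/sdc1                      # wipe remaining signatures on the partition
parted /dev/sdc rm 1                     # delete the partition
wipefs -a /dev/sdc                       # optional: remove the partition table entirely
blockdev --flushbufs /dev/sdc            # flush outstanding I/O before physical removal
echo 1 > /sys/block/sdc/device/delete    # remove the SCSI device from the running system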
|
[
"vmstat 1 100 free",
"multipathd disablequeueing map multipath-device",
"umount /mnt/mount-point",
"wipefs -a /dev/vg0/myvol",
"lvremove vg0/myvol",
"vgremove vg0",
"pvremove /dev/sdc1",
"wipefs -a /dev/sdc1",
"parted /dev/sdc rm 1",
"wipefs -a /dev/sdc",
"multipath -l",
"multipath -f multipath-device",
"blockdev --flushbufs device",
"echo 1 > /sys/block/ device-name /device/delete",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 5G 0 disk sr0 11:0 1 1024M 0 rom vda 252:0 0 10G 0 disk |-vda1 252:1 0 1M 0 part |-vda2 252:2 0 100M 0 part /boot/efi `-vda3 252:3 0 9.9G 0 part /"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/removing-storage-devices_managing-storage-devices
|
Chapter 1. Getting started with Session Recording on RHEL
|
Chapter 1. Getting started with Session Recording on RHEL 1.1. Session Recording in RHEL The Session Recording solution in Red Hat Enterprise Linux 8 is based on the tlog package. You can use the tlog package and its associated web console session player to record and play back user terminal sessions. You can configure the recording to take place per user or user group via the SSSD service. All terminal input and output is captured and stored in a text-based format in the system journal. Important To not intercept raw passwords and other sensitive information, recording of the terminal input is disabled by default. Be aware that if you turn on recording of the terminal input, all entered passwords are captured in plaintext. You can use this solution for auditing user sessions on security-sensitive systems or, in the event of a security breach, reviewing recorded sessions as part of forensic analysis. As an administrator, you can configure session recording locally on RHEL 8 systems. You can review the recorded sessions from the web console interface or in a terminal using the tlog-play command. 1.2. Components of Session Recording There are three main components to the Session Recording solution: the tlog utility, the SSSD service and a web console embedded user interface. tlog The tlog utility is a terminal input/output (I/O) recording and playback program. It inserts the tlog-rec-session tool between the user terminal and the user shell, and logs everything that passes through as JSON messages. SSSD The System Security Services Daemon (SSSD) service provides a set of daemons to manage access to remote directories and authentication mechanisms. When configuring session recording, you can use SSSD to specify which users or user groups to record. You can configure these settings from a command-line interface (CLI) or from the RHEL 8 web console interface. The RHEL 8 web console embedded interface The Session Recording page is part of the RHEL 8 web console interface and you can use it to manage recorded sessions. Important You need administrator privileges to access the recorded sessions. 1.3. Limitations of Session Recording These are the most notable limitations of the Session Recording solution. Recordings of root user are not reliable, because the root user can circumvent the recording process. Session recording does not record the terminal in a GNOME 3 graphical session. Recording terminals in graphical sessions is not supported because a graphical session has a single audit session ID for all terminals and tlog is unable to distinguish between the terminals and prevent repeated recordings. If session recording is configured to log to the journal , the recorded user will see the act of recording the results of viewing the system journal or /var/log/messages . Because viewing generates logs, which then print to the screen, this causes Session Recording to record this action, which generates more records, causing a loop of flooded output. You can use the following command to work around this problem: You can also configure tlog to limit the output. For details, see tlog-rec or tlog-rec-session manual pages. To record users executing remote access commands, you must configure session recording for that user on the target host. For example, to record the following remote access command, you need to configure session recording for the admin user on the client host: All recordings are lost on reboot because the journal is stored in-memory by default on RHEL 8. 
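A common way to work around the reboot limitation is to make the system journal persistent. This is a hedged sketch rather than part of the Session Recording documentation; either approach below achieves the same result.

# Option 1: create the persistent journal directory (with the default Storage=auto, logs persist here).
mkdir -p /var/log/journal
systemctl restart systemd-journald
# Option 2: set Storage=persistent in /etc/systemd/journald.conf, then restart systemd-journald.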
To export recordings see Exporting recorded sessions to a file .
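To illustrate the per-user and per-group scoping that SSSD provides, a recording configuration of roughly the following form selects who is recorded. The drop-in file name and the user and group names are illustrative; the options themselves are the ones documented in the sssd-session-recording(5) manual page.

# /etc/sssd/conf.d/sssd-session-recording.conf (illustrative path; the file must be owned by root with mode 0600)
[session_recording]
scope = some             # "some" records only the listed users and groups; "all" records everyone
users = example_user
groups = example_group

After editing the configuration, restart the sssd service so the new recording scope takes effect.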
|
[
"journalctl -f | grep -v 'tlog-rec-session'",
"ssh admin@client rm -f /some/file"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/recording_sessions/getting-started-with-session-recording_getting-started-with-session-recording
|
Chapter 1. Preparing to deploy OpenShift Data Foundation
|
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See Planning your deployment . For Red Hat Enterprise Linux based hosts for worker nodes in a user provisioned infrastructure (UPI), enable the container access to the underlying file system. Follow the instructions on enable file system access for containers on Red Hat Enterprise Linux based nodes . Note Skip this step for Red Hat Enterprise Linux CoreOS (RHCOS). Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabled the key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Enabling file system access for containers on Red Hat Enterprise Linux based nodes Deploying OpenShift Data Foundation on an OpenShift Container Platform with worker nodes on a Red Hat Enterprise Linux base in a user provisioned infrastructure (UPI) does not automatically provide container access to the underlying Ceph file system. Note Skip this step for hosts based on Red Hat Enterprise Linux CoreOS (RHCOS). Procedure Log in to the Red Hat Enterprise Linux based node and open a terminal. For each node in your cluster: Verify that the node has access to the rhel-7-server-extras-rpms repository. If you do not see both rhel-7-server-rpms and rhel-7-server-extras-rpms in the output, or if there is no output, run the following commands to enable each repository: Install the required packages. Persistently enable container use of the Ceph file system in SELinux. 1.2. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully, choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. 
For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy. 1.3. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available to be used by OpenShift Data Foundation. The devices you use must be empty; the disks must not include physical volumes (PVs), volume groups (VGs), or logical volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . Arbiter stretch cluster requirements [Technology Preview] In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This is a technology preview feature that is currently intended for deployment in the OpenShift Container Platform on-premises. For detailed requirements and instructions, see Configuring OpenShift Data Foundation for Metro-DR stretch cluster . Note Flexible scaling and Arbiter both cannot be enabled at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas in an Arbiter cluster, you need to add at least one node in each of the two data zones. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. For more information, see Resource requirements section in the Planning guide.
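A minimal sketch for confirming that a candidate local disk is empty before giving it to OpenShift Data Foundation, as required above; the device name /dev/sdb is an assumption for illustration only:
# the device should report no filesystem signature and no PV/VG/LV metadata
lsblk -f /dev/sdb
pvs | grep sdb || echo "no physical volume on /dev/sdb"
# optionally clear leftover signatures (destructive - only on a disk you intend to reuse)
wipefs -a /dev/sdb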
|
[
"subscription-manager repos --list-enabled | grep rhel-7-server",
"subscription-manager repos --enable=rhel-7-server-rpms",
"subscription-manager repos --enable=rhel-7-server-extras-rpms",
"yum install -y policycoreutils container-selinux",
"setsebool -P container_use_cephfs on",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_on_vmware_vsphere/preparing_to_deploy_openshift_data_foundation
|
Chapter 4. Configuring the instrumentation
|
Chapter 4. Configuring the instrumentation The Red Hat build of OpenTelemetry Operator uses an Instrumentation custom resource that defines the configuration of the instrumentation. 4.1. Auto-instrumentation in the Red Hat build of OpenTelemetry Operator Auto-instrumentation in the Red Hat build of OpenTelemetry Operator can automatically instrument an application without manual code changes. Developers and administrators can monitor applications with minimal effort and changes to the existing codebase. Auto-instrumentation runs as follows: The Red Hat build of OpenTelemetry Operator injects an init-container, or a sidecar container for Go, to add the instrumentation libraries for the programming language of the instrumented application. The Red Hat build of OpenTelemetry Operator sets the required environment variables in the application's runtime environment. These variables configure the auto-instrumentation libraries to collect traces, metrics, and logs and send them to the appropriate OpenTelemetry Collector or another telemetry backend. The injected libraries automatically instrument your application by connecting to known frameworks and libraries, such as web servers or database clients, to collect telemetry data. The source code of the instrumented application is not modified. Once the application is running with the injected instrumentation, the application automatically generates telemetry data, which is sent to a designated OpenTelemetry Collector or an external OTLP endpoint for further processing. Auto-instrumentation enables you to start collecting telemetry data quickly without having to manually integrate the OpenTelemetry SDK into your application code. However, some applications might require specific configurations or custom manual instrumentation. 4.2. OpenTelemetry instrumentation configuration options The Red Hat build of OpenTelemetry can inject and configure the OpenTelemetry auto-instrumentation libraries into your workloads. Currently, the project supports injection of the instrumentation libraries from Go, Java, Node.js, Python, .NET, and the Apache HTTP Server ( httpd ). Important The Red Hat build of OpenTelemetry Operator only supports the injection mechanism of the instrumentation libraries but does not support instrumentation libraries or upstream images. Customers can build their own instrumentation images or use community images. 4.2.1. Instrumentation options Instrumentation options are specified in an Instrumentation custom resource (CR). Sample Instrumentation CR apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: "20" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: "0.25" java: env: - name: OTEL_JAVAAGENT_DEBUG value: "true" Table 4.1. Parameters used by the Operator to define the Instrumentation Parameter Description Values env Common environment variables to define across all the instrumentations. exporter Exporter configuration. propagators Propagators defines inter-process context propagation configuration. tracecontext , baggage , b3 , b3multi , jaeger , ottrace , none resource Resource attributes configuration. sampler Sampling configuration. apacheHttpd Configuration for the Apache HTTP Server instrumentation. dotnet Configuration for the .NET instrumentation. go Configuration for the Go instrumentation. 
java Configuration for the Java instrumentation. nodejs Configuration for the Node.js instrumentation. python Configuration for the Python instrumentation. Table 4.2. Default protocol for auto-instrumentation Auto-instrumentation Default protocol Java 1.x otlp/grpc Java 2.x otlp/http Python otlp/http .NET otlp/http Go otlp/http Apache HTTP Server otlp/grpc 4.2.2. Configuration of the OpenTelemetry SDK variables You can use the instrumentation.opentelemetry.io/inject-sdk annotation in the OpenTelemetry Collector custom resource to instruct the Red Hat build of OpenTelemetry Operator to inject some of the following OpenTelemetry SDK environment variables, depending on the Instrumentation CR, into your pod: OTEL_SERVICE_NAME OTEL_TRACES_SAMPLER OTEL_TRACES_SAMPLER_ARG OTEL_PROPAGATORS OTEL_RESOURCE_ATTRIBUTES OTEL_EXPORTER_OTLP_ENDPOINT OTEL_EXPORTER_OTLP_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE OTEL_EXPORTER_OTLP_CLIENT_KEY Table 4.3. Values for the instrumentation.opentelemetry.io/inject-sdk annotation Value Description "true" Injects the Instrumentation resource with the default name from the current namespace. "false" Injects no Instrumentation resource. "<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from the current namespace. "<namespace>/<instrumentation_name>" Specifies the name of the Instrumentation resource to inject from another namespace. 4.2.3. Exporter configuration Although the Instrumentation custom resource supports setting up one or more exporters per signal, auto-instrumentation configures only the OTLP Exporter. So you must configure the endpoint to point to the OTLP Receiver on the Collector. Sample exporter TLS CA configuration using a config map apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the config map. The config map must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the config map or the absolute path to the certificate if the certificate is already present in the workload file system. Sample exporter mTLS configuration using a Secret apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation # ... spec # ... exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5 # ... 1 Specifies the OTLP endpoint using the HTTPS scheme and TLS. 2 Specifies the name of the Secret for the ca_file , cert_file , and key_file values. The Secret must already exist in the namespace of the pod injecting the auto-instrumentation. 3 Points to the CA certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 4 Points to the client certificate in the Secret or the absolute path to the certificate if the certificate is already present in the workload file system. 5 Points to the client key in the Secret or the absolute path to a key if the key is already present in the workload file system. Note You can provide the CA certificate in a config map or Secret. If you provide it in both, the config map takes higher precedence than the Secret. 
Example configuration for CA bundle injection by using a config map and Instrumentation CR apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: "true" # ... --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt # ... 4.2.4. Configuration of the Apache HTTP Server auto-instrumentation Important The Apache HTTP Server auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.4. Parameters for the .spec.apacheHttpd field Name Description Default attrs Attributes specific to the Apache HTTP Server. configPath Location of the Apache HTTP Server configuration. /usr/local/apache2/conf env Environment variables specific to the Apache HTTP Server. image Container image with the Apache SDK and auto-instrumentation. resourceRequirements The compute resource requirements. version Apache HTTP Server version. 2.4 The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-apache-httpd: "true" 4.2.5. Configuration of the .NET auto-instrumentation Important The .NET auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to .NET. image Container image with the .NET SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For the .NET auto-instrumentation, the required OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . The .NET autoinstrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-dotnet: "true" 4.2.6. Configuration of the Go auto-instrumentation Important The Go auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Go. image Container image with the Go SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-go: "true" Additional permissions required for the Go auto-instrumentation in the OpenShift cluster apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - "SYS_PTRACE" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny Tip The CLI command for applying the permissions for the Go auto-instrumentation in the OpenShift cluster is as follows: USD oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account> 4.2.7. Configuration of the Java auto-instrumentation Important The Java auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Java. image Container image with the Java SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-java: "true" 4.2.8. Configuration of the Node.js auto-instrumentation Important The Node.js auto-instrumentation is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Node.js. image Container image with the Node.js SDK and auto-instrumentation. resourceRequirements The compute resource requirements. The PodSpec annotations to enable injection instrumentation.opentelemetry.io/inject-nodejs: "true" instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable" The instrumentation.opentelemetry.io/otel-go-auto-target-exe annotation sets the value for the required OTEL_GO_AUTO_TARGET_EXE environment variable. 4.2.9. Configuration of the Python auto-instrumentation Important The Python auto-instrumentation is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important By default, this feature injects unsupported, upstream instrumentation libraries. Name Description env Environment variables specific to Python. image Container image with the Python SDK and auto-instrumentation. resourceRequirements The compute resource requirements. For Python auto-instrumentation, the OTEL_EXPORTER_OTLP_ENDPOINT environment variable must be set if the endpoint of the exporters is set to 4317 . Python auto-instrumentation uses http/proto by default, and the telemetry data must be set to the 4318 port. The PodSpec annotation to enable injection instrumentation.opentelemetry.io/inject-python: "true" 4.2.10. Multi-container pods The instrumentation is run on the first container that is available by default according to the pod specification. In some cases, you can also specify target containers for injection. Pod annotation instrumentation.opentelemetry.io/container-names: "<container_1>,<container_2>" Note The Go auto-instrumentation does not support multi-container auto-instrumentation injection. 4.2.11. Multi-container pods with multiple instrumentations Injecting instrumentation for an application language to one or more containers in a multi-container pod requires the following annotation: instrumentation.opentelemetry.io/<application_language>-container-names: "<container_1>,<container_2>" 1 1 You can inject instrumentation for only one language per container. For the list of supported <application_language> values, see the following table. Table 4.5. Supported values for the <application_language> Language Value for <application_language> ApacheHTTPD apache DotNet dotnet Java java NGINX inject-nginx NodeJS nodejs Python python SDK sdk 4.2.12. Using the instrumentation CR with Service Mesh When using the instrumentation custom resource (CR) with Red Hat OpenShift Service Mesh, you must use the b3multi propagator.
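As a hedged example, one way to turn on Java auto-instrumentation for an existing workload is to add the injection annotation to the pod template and reference an Instrumentation resource by name, as described in the annotation table above. The deployment name and namespace below are assumptions for illustration:
# reference the Instrumentation CR named "java-instrumentation" in the same namespace
oc patch deployment my-app -n my-namespace --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"java-instrumentation"}}}}}'
# verify that the annotation landed on the pod template
oc get deployment my-app -n my-namespace -o jsonpath='{.spec.template.metadata.annotations}'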
|
[
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: java-instrumentation spec: env: - name: OTEL_EXPORTER_OTLP_TIMEOUT value: \"20\" exporter: endpoint: http://production-collector.observability.svc.cluster.local:4317 propagators: - w3c sampler: type: parentbased_traceidratio argument: \"0.25\" java: env: - name: OTEL_JAVAAGENT_DEBUG value: \"true\"",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: configMapName: ca-bundle 2 ca_file: service-ca.crt 3",
"apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation spec exporter: endpoint: https://production-collector.observability.svc.cluster.local:4317 1 tls: secretName: serving-certs 2 ca_file: service-ca.crt 3 cert_file: tls.crt 4 key_file: tls.key 5",
"apiVersion: v1 kind: ConfigMap metadata: name: otelcol-cabundle namespace: tutorial-application annotations: service.beta.openshift.io/inject-cabundle: \"true\" --- apiVersion: opentelemetry.io/v1alpha1 kind: Instrumentation metadata: name: my-instrumentation spec: exporter: endpoint: https://simplest-collector.tracing-system.svc.cluster.local:4317 tls: configMapName: otelcol-cabundle ca: service-ca.crt",
"instrumentation.opentelemetry.io/inject-apache-httpd: \"true\"",
"instrumentation.opentelemetry.io/inject-dotnet: \"true\"",
"instrumentation.opentelemetry.io/inject-go: \"true\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: otel-go-instrumentation-scc allowHostDirVolumePlugin: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - \"SYS_PTRACE\" fsGroup: type: RunAsAny runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny",
"oc adm policy add-scc-to-user otel-go-instrumentation-scc -z <service_account>",
"instrumentation.opentelemetry.io/inject-java: \"true\"",
"instrumentation.opentelemetry.io/inject-nodejs: \"true\" instrumentation.opentelemetry.io/otel-go-auto-target-exe: \"/path/to/container/executable\"",
"instrumentation.opentelemetry.io/inject-python: \"true\"",
"instrumentation.opentelemetry.io/container-names: \"<container_1>,<container_2>\"",
"instrumentation.opentelemetry.io/<application_language>-container-names: \"<container_1>,<container_2>\" 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/otel-configuration-of-instrumentation
|
Chapter 7. gnome-session
|
Chapter 7. gnome-session The gnome-session program has also been updated in Red Hat Enterprise Linux 7. It still starts the GNOME Desktop, but some of its components have changed. gnome-session-properties The gnome-session-properties application is still part of the gnome-session package. However, its functionality is now limited to managing startup programs for individual users, and to saving currently running applications when logging out. The latter functionality has been kept from Red Hat Enterprise Linux 6. named session The Save now button saves a session at a specific time and lets you name it. The saved sessions are restored on login. When you click Automatically remember running applications when logging out in gnome-session-properties , the list of saved applications is shown on login as well. With this update, it is also possible to create multiple layouts and rename them, or to select multiple user sessions for one user account. Getting More Information For detailed information on session management, see Chapter 14, Session Management . For information on how to manage startup (autostart) applications for all users, see Section 14.3.5, "Adding an Autostart Application for All Users" .
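As an illustrative sketch only (the application name and command are placeholders), a per-user startup program of the kind gnome-session-properties manages can also be created by dropping a .desktop file into ~/.config/autostart:
mkdir -p ~/.config/autostart
cat <<'EOF' > ~/.config/autostart/myapp.desktop
[Desktop Entry]
Type=Application
Name=My Application
Exec=/usr/bin/myapp
X-GNOME-Autostart-enabled=true
EOF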
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/gnome-session
|
Installing Red Hat Virtualization as a standalone Manager with remote databases
|
Installing Red Hat Virtualization as a standalone Manager with remote databases Red Hat Virtualization 4.3 ALTERNATIVE method - Installing the Red Hat Virtualization Manager on one server, and its databases on a second server Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document describes how to install a standalone Manager environment - where the Red Hat Virtualization Manager is installed on either a physical server or a virtual machine hosted in another environment - with the Manager database and the Data Warehouse service and database hosted on a remote server. Although you can choose to host one database locally and the other remotely, this document assumes that both databases will be hosted remotely. If this is not the configuration you want to use, see the other Installation Options in the Product Guide .
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/index
|
1.2. Apache Subversion (SVN)
|
1.2. Apache Subversion (SVN) Apache Subversion , commonly abbreviated as SVN , is a centralized version control system with a client-server architecture. It is a successor to the older Concurrent Versions System (CVS), preserves the same development model, and addresses problems often encountered with CVS. 1.2.1. Installing and Configuring Subversion Installing the subversion Package In Red Hat Enterprise Linux 6, Subversion is provided by the subversion package. To install the subversion package and all its dependencies on your system, type the following at a shell prompt as root : yum install subversion This installs a command line Subversion client, a Subversion server, and other related tools to the system. Setting Up the Default Editor When using Subversion on the command line, certain commands such as svn import or svn commit require the user to write a short log message. To determine which text editor to start, the svn client application first reads the contents of the environment variable $SVN_EDITOR , then reads more general environment variables $VISUAL and $EDITOR , and if none of these is set, it reports an error. To persistently change the value of the $SVN_EDITOR environment variable, run the following command: echo " export SVN_EDITOR= command " >> ~/.bashrc This adds the export SVN_EDITOR= command line to your ~/.bashrc file. Replace command with a command that runs the editor of your choice (for example, emacs ). Note that for this change to take effect in the current shell session, you must execute the commands in ~/.bashrc by typing the following at a shell prompt: . ~/.bashrc Example 1.3. Setting up the default text editor To configure the Subversion client to use Emacs as a text editor, type:
|
[
"~]USD echo \"export SVN_EDITOR=emacs\" >> ~/.bashrc ~]USD . ~/.bashrc"
] |
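The following sketch, with a placeholder repository URL, shows how you might confirm the editor setting from Example 1.3 and exercise it with a first commit; it assumes the export line has already been sourced:
echo $SVN_EDITOR                      # should print emacs (or your chosen editor)
svn checkout http://svn.example.com/repos/project project
cd project
echo "change" >> README
svn commit                            # svn opens $SVN_EDITOR for the log message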
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/collaborating.svn
|
5.3. Device Assignment and SR-IOV
|
5.3. Device Assignment and SR-IOV The following diagram demonstrates the involvement of the kernel in the Device Assignment and SR-IOV architectures. Figure 5.2. Device assignment and SR-IOV Device assignment presents the entire device to the guest. SR-IOV requires support in both the drivers and the hardware, including the NIC and the system board, and allows multiple virtual devices to be created and passed into different guests. A vendor-specific driver is required in the guest; however, SR-IOV offers the lowest latency of any network option.
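As a rough sketch (the interface name ens1f0, the PCI address, and the VF count are assumptions, and the exact sysfs behavior depends on the NIC driver), virtual functions for SR-IOV can typically be created and inspected as follows:
# confirm the adapter advertises SR-IOV capability
lspci -vvv -s 0000:3b:00.0 | grep -i "Single Root I/O Virtualization"
# create four virtual functions on the physical function
echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs
# the new virtual functions appear as additional PCI devices
lspci | grep -i "virtual function"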
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-networking-device_assignment_and_sriov
|
Chapter 3. Configuring GitLab Webhooks for automated pipeline triggers
|
Chapter 3. Configuring GitLab Webhooks for automated pipeline triggers Learn how to set up webhooks and secrets in GitLab to automatically trigger pipeline runs in RHDH upon code updates. Prerequisites You have an existing GitLab project. You have administrator privileges on the OpenShift web console. Procedure Retrieve Webhook URL and Secret Token: Log in to the OpenShift web console with Administrator privileges. Navigate to the rhtap project, expand Pipelines , and then select PipelineRuns . Locate the rhtap-pe-info-<> pipeline run, and then select the Logs tab. Note These logs contain the webhook URL and secret token required for GitLab configuration. Configure Webhook in GitLab: Within your GitLab repository, navigate to Settings > Webhooks . In the URL field, enter the webhook URL copied from Step 1. In the Secret Token field, enter the secret token copied from Step 1. In the Trigger section: Select Push events . Select Merge request events . Click Add Webhook . Verification Push your code changes to the GitLab repository. Navigate to the CI tab in RHDH. Verify that a pipeline run is triggered for your code push.
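If you prefer the command line to the web console, a hedged sketch for locating the same information might look like the following; it assumes the Tekton CLI (tkn) is installed, and the exact pipeline run name and log format can differ in your environment:
oc get pipelineruns -n rhtap | grep rhtap-pe-info
tkn pipelinerun logs <rhtap-pe-info-run-name> -n rhtap | grep -iE 'webhook|token'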
| null |
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/customizing_red_hat_trusted_application_pipeline/webhook-configurations-for-gitlab_default
|
8.111. man-pages-ja
|
8.111. man-pages-ja 8.111.1. RHBA-2013:1094 - man-pages-ja bug fix update An updated man-pages-ja package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The man-pages-ja package contains manual pages in Japanese. Bug Fixes BZ#949787 The shmat(2) man page in the release did not mention the EIDRM error code, which can be returned by the shmat() system call. With this update, the EIDRM error code is documented in the shmat(2) man page. BZ#957937 The strtoul(3) man page in the release incorrectly described the range of the return value. This update fixes the aforementioned problem. Users of man-pages-ja are advised to upgrade to this updated package, which fixes these bugs.
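For illustration, applying the update and reviewing the corrected Japanese page could look like the following sketch, assuming a Japanese locale is available on the system:
yum update man-pages-ja
LANG=ja_JP.UTF-8 man 2 shmat   # review the EIDRM entry in the updated page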
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/man-pages-ja
|
Chapter 7. Pulling images from a container repository
|
Chapter 7. Pulling images from a container repository Pull automation execution environments from the automation hub remote registry to make a copy on your local machine. Automation hub provides a podman pull command for the latest version of each automation execution environment in the container repository. You can copy and paste this command into your terminal, or use podman pull to copy an automation execution environment based on one of its tags. 7.1. Pulling an image You can pull automation execution environments from the automation hub remote registry to make a copy on your local machine. Prerequisites You must have permission to view and pull from a private container repository. Procedure If you are pulling automation execution environments from a password- or token-protected registry, create a credential before pulling the automation execution environment. From the navigation panel, select Automation Content Execution Environments . Select your automation execution environment. In the Pull this image entry, click Copy to clipboard . Paste and run the command in your terminal. Verification Run podman images to view images on your local machine. 7.2. Syncing images from a container repository You can pull automation execution environments from the automation hub remote registry to sync an image to your local machine. To sync an automation execution environment from a remote registry, you must first configure a remote registry. Prerequisites You must have permission to view and pull from a private container repository. Procedure From the navigation panel, select Automation Content Execution Environments . Add https://registry.redhat.io to the registry. Add any required credentials to authenticate. Note Some remote registries are aggressive with rate limiting. Set a rate limit under Advanced Options . From the navigation panel, select Automation Content Execution Environments . Click Create execution environment in the page header. Select the registry you want to pull from. The Name field displays the name of the automation execution environment as it appears in your local registry. Note The Upstream name field is the name of the image on the remote server. For example, if the upstream name is set to "alpine" and the Name field is "local/alpine", the alpine image is downloaded from the remote and renamed to "local/alpine". Set a list of tags to include or exclude. Syncing automation execution environments with a large number of tags is time consuming and uses a lot of disk space. Additional resources See Red Hat Container Registry Authentication for a list of registries. See the What is Podman? documentation for options to use when pulling images.
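A minimal sketch of the pull workflow from a terminal; the hub host name and the execution environment name are placeholders, and in practice you copy the exact podman pull command from the Pull this image entry described above:
podman login hub.example.com
podman pull hub.example.com/ee-minimal-rhel8:latest
podman images | grep ee-minimal-rhel8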
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/pulling-images-container-repository
|
2.2. Logical Volumes
|
2.2. Logical Volumes Volume management creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly. With a logical volume, you are not restricted to physical disk sizes. In addition, the hardware storage configuration is hidden from the software so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs. Logical volumes provide the following advantages over using physical storage directly: Flexible capacity When using logical volumes, file systems can extend across multiple disks, since you can aggregate disks and partitions into a single logical volume. Resizeable storage pools You can extend logical volumes or reduce logical volumes in size with simple software commands, without reformatting and repartitioning the underlying disk devices. Online data relocation To deploy newer, faster, or more resilient storage subsystems, you can move data while your system is active. Data can be rearranged on disks while the disks are in use. For example, you can empty a hot-swappable disk before removing it. Convenient device naming Logical storage volumes can be managed in user-defined groups, which you can name according to your convenience. Disk striping You can create a logical volume that stripes data across two or more disks. This can dramatically increase throughput. Mirroring volumes Logical volumes provide a convenient way to configure a mirror for your data. Volume Snapshots Using logical volumes, you can take device snapshots for consistent backups or to test the effect of changes without affecting the real data. The implementation of these features in LVM is described in the remainder of this document.
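To make the striping and snapshot features concrete, here is a brief sketch with placeholder device and volume names; it is illustrative only, not a prescribed layout:
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
# stripe a 10G logical volume across the two physical volumes
lvcreate -L 10G -i 2 -n lv_app vg_data
# take a 1G snapshot for a consistent backup or a test run
lvcreate -L 1G -s -n lv_app_snap /dev/vg_data/lv_app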
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/logical_volumes
|
4.4.10. Shrinking Logical Volumes
|
4.4.10. Shrinking Logical Volumes To reduce the size of a logical volume, first unmount the file system. You can then use the lvreduce command to shrink the volume. After shrinking the volume, remount the file system. Warning It is important to reduce the size of the file system or whatever is residing in the volume before shrinking the volume itself, otherwise you risk losing data. Shrinking a logical volume frees some of the volume group to be allocated to other logical volumes in the volume group. The following example reduces the size of logical volume lvol1 in volume group vg00 by 3 logical extents.
|
[
"lvreduce -l -3 vg00/lvol1"
] |
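Putting the warning above into practice, a hedged end-to-end sketch might look like the following; the mount point, the ext3 filesystem, and the target size are assumptions, and the extent-based lvreduce invocation shown above (-l -3) is an equally valid final step:
umount /mnt/data
e2fsck -f /dev/vg00/lvol1
resize2fs /dev/vg00/lvol1 8G     # shrink the filesystem first
lvreduce -L 8G vg00/lvol1        # then shrink the logical volume to match
mount /dev/vg00/lvol1 /mnt/data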
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lv_reduce
|
Updating OpenShift Data Foundation
|
Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> -> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> -> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> -> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Had Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Had Ceph Storage version in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Had Ceph version corresponding to the version in use. 
You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. In OpenShift Data Foundation clusters with disaster recovery (DR) enabled, during upgrade to version 4.18, bluestore-rdr OSDs are migrated to bluestore OSDs. bluestore backed OSDs now provide the same improved performance of bluestore-rdr based OSDs, which is important when the cluster is required to be used for Regional Disaster Recovery. During upgrade you can view the status of the OSD migration. In the OpenShift Web Console, navigate to Storage -> Data Foundation -> Storage System . In the Activity card of the Block and File tab you can view ongoing activities. Migrating cluster OSDs shows the status of the migration from bluestore-rdr to bluestore . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . Chapter 2. OpenShift Data Foundation upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster upgrades. 
By controlling the pace of upgrades, these upgrade channels allow you to choose an upgrade strategy. As OpenShift Data Foundation gets deployed as an operator in OpenShift Container Platform, it follows the same strategy to control the pace of upgrades by shipping the fixes in multiple channels. Upgrade channels are tied to a minor version of OpenShift Data Foundation. For example, OpenShift Data Foundation 4.18 upgrade channels recommend upgrades within 4.18. Upgrades to future releases is not recommended. This strategy ensures that administrators can explicitly decide to upgrade to the minor version of OpenShift Data Foundation. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the odf-operator decides the version of OpenShift Data Foundation to be installed. By default, it always installs the latest OpenShift Data Foundation release maintaining the compatibility with OpenShift Container Platform. So, on OpenShift Container Platform 4.18, OpenShift Data Foundation 4.18 will be the latest version which can be installed. OpenShift Data Foundation upgrades are tied to the OpenShift Container Platform upgrade to ensure that compatibility and interoperability are maintained with the OpenShift Container Platform. For OpenShift Data Foundation 4.18, OpenShift Container Platform 4.18 and 4.18 (when generally available) are supported. OpenShift Container Platform 4.18 is supported to maintain forward compatibility of OpenShift Data Foundation with OpenShift Container Platform. Keep the OpenShift Data Foundation version the same as OpenShift Container Platform in order to get the benefit of all the features and enhancements in that release. Important Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.15 to 4.17 and then to 4.18. You cannot update from OpenShift Container Platform 4.17 to 4.18 directly. For more information, see Preparing to perform an EUS-to-EUS update of the Updating clusters guide in OpenShift Container Platform documentation. OpenShift Data Foundation 4.18 offers the following upgrade channel: stable-4.18 stable-4.17 stable-4.18 channel Once a new version is Generally Available, the stable channel corresponding to the minor version gets updated with the new image which can be used to upgrade. You can use the stable-4.18 channel to upgrade from OpenShift Data Foundation 4.17 and upgrades within 4.18. stable-4.17 You can use the stable-4.17 channel to upgrade from OpenShift Data Foundation 4.15 and upgrades within 4.17. Chapter 3. Updating Red Hat OpenShift Data Foundation 4.17 to 4.18 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. 
As there is no dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution, solution . Important Upgrading to 4.18 directly from any version older than 4.17 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.18.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Optional: To reduce the upgrade time for large clusters that are using CSI plugins, make sure to tune the following parameters in the rook-ceph-operator-config configmap to a higher count or percentage. CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE Note By default, the rook-ceph-operator-config configmap is empty and you need to add the data key. This affects CephFS and CephRBD daemonsets and allows the pods to restart simultaneously or be unavailable and reduce the upgrade time. For an optimal value, you can set the parameter values to 20%. However, if the value is too high, disruption for new volumes might be observed during the upgrade. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for noobaa-core account as follows: Log into AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: Search for the role name that you obtained from the step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.18 update channel and Save it. 
If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators -> Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . Chapter 4. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . 
If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads -> Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators -> Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators -> Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy If verification steps fail, contact Red Hat Support . Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy to Automatic . Changing the update approval strategy to Manual will need manual approval for each upgrade. Procedure Navigate to Operators -> Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on OpenShift Data Foundation operator name Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. 
Chapter 6. Updating the OpenShift Data Foundation external secret Update the OpenShift Data Foundation external secret after updating to the latest version of OpenShift Data Foundation. Note Updating the external secret is not required for batch updates. For example, when updating from OpenShift Data Foundation 4.17.x to 4.17.y. Prerequisites Update the OpenShift Container Platform cluster to the latest stable release of 4.17.z, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and the data is resilient. Navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview - Block and File tab, check the Status card and confirm that the Storage cluster has a green tick indicating it is healthy. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. Red Hat Ceph Storage must have a Ceph dashboard installed and configured. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap. Important Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from version OpenShift Data Foundation 4.19 and onward. Using the ConfigMap will be the only supported method. CSV ConfigMap Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. The updated permissions for the user are set as: Run the previously downloaded python script using one of the following options based on the method you used during deployment, either a configuration file or command-line flags. Configuration file Create a config.ini file that includes all of the parameters used during initial deployment. Run the following command to get the configmap output which contains those parameters: Add the parameters from the output to the config.ini file. You can add additional parameters to the config.ini file to those used during deployment. See Table 6.1, "Mandatory and optional parameters used during upgrade" for descriptions of the parameters. Example config.ini file: Run the python script: Replace <config-file> with the path to the config.ini file. Command-line flags Run the previously downloaded python script and pass the parameters for your deployment. Make sure to use all the flags that you used in the original deployment including any optional argument that you have used. You can also add additional flags to those used during deployment. See Table 6.1, "Mandatory and optional parameters used during upgrade" for descriptions of the parameters. Table 6.1. Mandatory and optional parameters used during upgrade Parameter Description rbd-data-pool-name (Mandatory) Used for providing block storage in OpenShift Data Foundation. rgw-endpoint (Optional) Provide this parameter if object storage is to be provisioned through Ceph RADOS Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> . 
monitoring-endpoint (Optional) Accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. monitoring-endpoint-port (Optional) The port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. run-as-user (Mandatory) The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certificate validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. output (Optional) The file where the output is stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure-coded RBD metadata pool. dry-run (Optional) This parameter prints the commands that would be executed without running them. Save the JSON output generated after running the script in the previous step. Example output: Upload the generated JSON file. Log in to the OpenShift Web Console. Click Workloads -> Secrets . Set project to openshift-storage . Click rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret . Click Browse and upload the JSON file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Details card to verify that the RHCS dashboard link is available and also check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy. The RADOS Object Gateway is only listed if RADOS Object Gateway endpoint details were included while deploying OpenShift Data Foundation in external mode. If verification steps fail, contact Red Hat Support .
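If you prefer to update the secret from the command line instead of uploading the file in the web console, a minimal sketch follows. It assumes the script output was saved to output.json and that the secret stores the JSON under the external_cluster_details key; verify the key name in your cluster before applying, as it is an assumption here rather than a documented value.

# Inspect the existing secret to confirm the data key it uses
oc get secret rook-ceph-external-cluster-details -n openshift-storage -o yaml

# Recreate the secret from the regenerated JSON output and apply it
oc create secret generic rook-ceph-external-cluster-details -n openshift-storage --from-file=external_cluster_details=output.json --dry-run=client -o yaml | oc apply -f -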
|
[
"oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here",
"oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py",
"oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py",
"python3 ceph-external-cluster-details-exporter.py --upgrade",
"client.csi-cephfs-node key: AQCYz0piYgu/IRAAipji4C8+Lfymu9vOrox3zQ== caps: [mds] allow rw caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs = client.csi-cephfs-provisioner key: AQCYz0piDUMSIxAARuGUyhLXFO9u4zQeRG65pQ== caps: [mgr] allow rw caps: [mon] allow r, allow command 'osd blocklist' caps: [osd] allow rw tag cephfs metadata=* client.csi-rbd-node key: AQCYz0pi88IKHhAAvzRN4fD90nkb082ldrTaHA== caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd client.csi-rbd-provisioner key: AQCYz0pi6W8IIBAAgRJfrAW7kZfucNdqJqS9dQ== caps: [mgr] allow rw caps: [mon] profile rbd, allow command 'osd blocklist' caps: [osd] profile rbd",
"oc get configmap -namespace openshift-storage external-cluster-user-command --output jsonpath='{.data.args}'",
"[Configurations] format = bash cephfs-filesystem-name = <filesystem-name> rbd-data-pool-name = <pool_name>",
"python3 ceph-external-cluster-details-exporter.py --config-file <config-file>",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name _<rbd block pool name>_ --monitoring-endpoint _<ceph mgr prometheus exporter endpoint>_ --monitoring-endpoint-port _<ceph mgr prometheus exporter port>_ --rgw-endpoint _<rgw endpoint>_ --run-as-user _<ocs_client_name>_ [optional arguments]",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/updating_openshift_data_foundation/updating-the-openshift-data-foundation-external-secret_rhodf
|
3.2. Growing a File System on a Logical Volume
|
3.2. Growing a File System on a Logical Volume To grow a file system on a logical volume, perform the following steps: Determine whether there is sufficient unallocated space in the existing volume group to extend the logical volume. If not, perform the following procedure: Create a new physical volume with the pvcreate command. Use the vgextend command to extend the volume group that contains the logical volume with the file system you are growing to include the new physical volume. Once the volume group is large enough to include the larger file system, extend the logical volume with the lvresize command. Resize the file system on the logical volume. Note that you can use the -r option of the lvresize command to extend the logical volume and resize the underlying file system with a single command, as shown in the sketch that follows.
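A minimal sketch of the complete workflow is shown below. The device, volume group, and logical volume names, as well as the size, are examples only; substitute the values from your own system.

# Label a new device as a physical volume
pvcreate /dev/sdb1

# Add the new physical volume to the volume group that contains the logical volume
vgextend myvg /dev/sdb1

# Extend the logical volume by 20G and resize the file system in the same step (-r)
lvresize -r -L +20G /dev/myvg/mylv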
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/fsgrow_overview
|
Chapter 7. Managing the collection of usage data
|
Chapter 7. Managing the collection of usage data Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Collecting this data allows Red Hat to monitor and improve our software and support. For further details about the data Red Hat collects, see Usage data collection notice for OpenShift AI . Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment. See Disabling usage data collection for instructions on disabling the collection of this data in your cluster. If you have disabled data collection on your cluster, and you want to enable it again, see Enabling usage data collection for more information. 7.1. Usage data collection notice for OpenShift AI In connection with your use of this Red Hat offering, Red Hat may collect usage data about your use of the software. This data allows Red Hat to monitor the software and to improve Red Hat offerings and support, including identifying, troubleshooting, and responding to issues that impact users. What information does Red Hat collect? Tools within the software monitor various metrics and this information is transmitted to Red Hat. Metrics include information such as: Information about applications enabled in the product dashboard. The deployment sizes used (that is, the CPU and memory resources allocated). Information about documentation resources accessed from the product dashboard. The name of the notebook images used (that is, Minimal Python, Standard Data Science, and other images.). A unique random identifier that generates during the initial user login to associate data to a particular username. Usage information about components, features, and extensions. Third Party Service Providers Red Hat uses certain third party service providers to collect the telemetry data. Security Red Hat employs technical and organizational measures designed to protect the usage data. Personal Data Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such personal information and treat such personal information in accordance with Red Hat's Privacy Statement. For more information about Red Hat's privacy practices, see Red Hat's Privacy Statement . Enabling and Disabling Usage Data You can disable or enable usage data by following the instructions in Disabling usage data collection or Enabling usage data collection . 7.2. Enabling usage data collection Red Hat OpenShift AI administrators can select whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment. If you have disabled data collection previously, you can re-enable it by following these steps. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Locate the Usage data collection section. Select the Allow collection of usage data checkbox. Click Save changes . Verification A notification is shown when settings are updated: Settings changes saved. Additional resources Usage data collection notice for OpenShift AI 7.3. 
Disabling usage data collection Red Hat OpenShift AI administrators can choose whether to allow Red Hat to collect data about OpenShift AI usage in their cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster except when clusters are installed in a disconnected environment. You can disable data collection by following these steps. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Cluster settings . Locate the Usage data collection section. Clear the Allow collection of usage data checkbox. Click Save changes . Verification A notification is shown when settings are updated: Settings changes saved. Additional resources Usage data collection notice for OpenShift AI
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/managing-collection-of-usage-data
|
Chapter 19. Virtualization
|
Chapter 19. Virtualization 19.1. Virtual machines can now be managed using the web console The Virtual Machines page can now be added to the RHEL 8 web console interface, which enables the user to create and manage libvirt-based virtual machines (VMs). In addition, the Virtual Machine Manager ( virt-manager ) application has been deprecated, and may become unsupported in a future major release of RHEL. Note, however, that the web console currently does not provide all of the virtualization management features that virt-manager does. For details about the differences in available features between the RHEL 8 web console and the Virtual Machine Manager, see the Configuring and managing virtualization document. 19.2. The Q35 machine type is now supported by virtualization Red Hat Enterprise Linux 8 introduces support for Q35 , a more modern PCI Express-based machine type. This provides a variety of improvements in features and performance of virtual devices, and ensures that a wider range of modern devices are compatible with virtualization. In addition, virtual machines created in Red Hat Enterprise Linux 8 are set to use Q35 by default. Note that the previously default PC machine type has become deprecated and may become unsupported in a future major release of RHEL. However, changing the machine type of existing VMs from PC to Q35 is not recommended. Notable differences between PC and Q35 include: Older operating systems, such as Windows XP, do not support Q35 and will not boot if used on a Q35 VM. Currently, when using RHEL 6 as the operating system on a Q35 VM, hot-plugging a PCI device to that VM in some cases does not work. In addition, certain legacy virtio devices do not work properly on RHEL 6 Q35 VMs. Therefore, using the PC machine type is recommended for RHEL 6 VMs. Q35 emulates PCI Express (PCI-e) buses instead of PCI. As a result, a different device topology and addressing scheme is presented to the guest OS. Q35 has a built-in SATA/AHCI controller, instead of an IDE controller. The SecureBoot feature only works on Q35 VMs. 19.3. Removed virtualization functionality The cpu64-rhel6 CPU model has been deprecated and removed The cpu64-rhel6 QEMU virtual CPU model has been deprecated in RHEL 8.1, and has been removed from RHEL 8.2. It is recommended that you use the other CPU models provided by QEMU and libvirt , according to the CPU present on the host machine. IVSHMEM has been disabled The inter-VM shared memory device (IVSHMEM) feature, which provides shared memory between multiple virtual machines, is now disabled in Red Hat Enterprise Linux 8. A virtual machine configured with this device will fail to boot. Similarly, attempting to hot-plug such a device will also fail. virt-install can no longer use NFS locations With this update, the virt-install utility cannot mount NFS locations. As a consequence, attempting to install a virtual machine using virt-install with an NFS address as the value of the --location option fails. To work around this change, mount your NFS share prior to using virt-install , or use an HTTP location. RHEL 8 does not support the tulip driver With this update, the tulip network driver is no longer supported. As a consequence, when using RHEL 8 on a Generation 1 virtual machine (VM) on the Microsoft Hyper-V hypervisor, the "Legacy Network Adapter" device does not work, which causes PXE installation of such VMs to fail. For the PXE installation to work, install RHEL 8 on a Generation 2 Hyper-V VM.
If you require a RHEL 8 Generation 1 VM, use ISO installation. LSI Logic SAS and Parallel SCSI drivers are not supported The LSI Logic SAS driver ( mptsas ) and LSI Logic Parallel driver ( mptspi ) for SCSI are no longer supported. As a consequence, the drivers can be used for installing RHEL 8 as a guest operating system on a VMware hypervisor to a SCSI disk, but the created VM will not be supported by Red Hat. Installing virtio-win no longer creates a floppy disk image with the Windows drivers Due to the limitations of floppy drives, virtio-win drivers are no longer provided as floppy images. Users should use the ISO image instead.
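As an illustration of the NFS workaround described above, the following sketch mounts the share manually before calling virt-install. The host name, export path, and VM sizing values are placeholders, not values taken from this document.

# Mount the NFS share on the host first, because virt-install no longer mounts NFS locations itself
mount -t nfs nfs.example.com:/exports/rhel8 /mnt/rhel8

# Install from the local mount point; an HTTP or HTTPS URL works the same way with --location
virt-install --name rhel8-vm --memory 4096 --vcpus 2 --disk size=20 --location /mnt/rhel8 --os-variant rhel8.0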
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/virtualization_virtualization
|
13.5. The /etc/openldap/schema/ Directory
|
13.5. The /etc/openldap/schema/ Directory The /etc/openldap/schema/ directory holds LDAP definitions, previously located in the slapd.at.conf and slapd.oc.conf files. The /etc/openldap/schema/redhat/ directory holds customized schemas distributed by Red Hat for Red Hat Enterprise Linux. All attribute syntax definitions and objectclass definitions are now located in the different schema files. The various schema files are referenced in /etc/openldap/slapd.conf using include lines, as shown in this example: Warning Do not modify schema items defined in the schema files installed by OpenLDAP. It is possible to extend the schema used by OpenLDAP to support additional attribute types and object classes using the default schema files as a guide. To do this, create a local.schema file in the /etc/openldap/schema/ directory. Reference this new schema within slapd.conf by adding the following line below the default include schema lines: Next, define new attribute types and object classes within the local.schema file. Many organizations use existing attribute types from the schema files installed by default and add new object classes to the local.schema file, as in the sketch that follows this section. Extending the schema to match certain specialized requirements is quite involved and beyond the scope of this chapter. Refer to http://www.openldap.org/doc/admin/schema.html for more information.
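The following is a minimal sketch of what a local.schema file might contain. The attribute name, object class name, and the OID arc 1.3.6.1.4.1.99999 are placeholders; in practice, use an OID arc assigned to your organization.

# /etc/openldap/schema/local.schema -- illustrative example only
attributetype ( 1.3.6.1.4.1.99999.1.1 NAME 'exampleBuildingName'
    DESC 'Building where the person works'
    EQUALITY caseIgnoreMatch
    SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )

objectclass ( 1.3.6.1.4.1.99999.2.1 NAME 'examplePerson'
    DESC 'inetOrgPerson extended with local attributes'
    SUP inetOrgPerson STRUCTURAL
    MAY ( exampleBuildingName ) )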
|
[
"include /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/inetorgperson.schema include /etc/openldap/schema/nis.schema include /etc/openldap/schema/rfc822-MailMember.schema include /etc/openldap/schema/redhat/autofs.schema",
"include /etc/openldap/schema/local.schema"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-ldap-files-schemas
|
Chapter 2. Using Active Directory as an Identity Provider for SSSD
|
Chapter 2. Using Active Directory as an Identity Provider for SSSD The System Security Services Daemon (SSSD) is a system service to access remote directories and authentication mechanisms. It connects a local system (an SSSD client ) to an external back-end system (a domain ). This provides the SSSD client with access to identity and authentication remote services using an SSSD provider. For example, these remote services include: an LDAP directory, an Identity Management (IdM) or Active Directory (AD) domain, or a Kerberos realm. When used as an identity management service for AD integration, SSSD is an alternative to services such as NIS or Winbind. This chapter describes how SSSD works with AD. For more details on SSSD, see the System-Level Authentication Guide . 2.1. How the AD Provider Handles Trusted Domains This section describes how SSSD handles trusted domains if you set id_provider = ad in the /etc/sssd/sssd.conf file. SSSD only supports domains in a single Active Directory forest. If SSSD requires access to multiple domains from multiple forests, consider using IdM with trusts (preferred) or the winbindd service instead of SSSD. By default, SSSD discovers all domains in the forest and, if a request for an object in a trusted domain arrives, SSSD tries to resolve it. If the trusted domains are not reachable or geographically distant, which makes them slow, you can set the ad_enabled_domains parameter in /etc/sssd/sssd.conf to limit from which trusted domains SSSD resolves objects. By default, you must use fully-qualified user names to resolve users from trusted domains.
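A minimal /etc/sssd/sssd.conf sketch showing where ad_enabled_domains fits is provided below. The domain names are examples and a real configuration typically contains additional options, so treat this as an illustration rather than a complete file.

[sssd]
config_file_version = 2
services = nss, pam
domains = ad.example.com

[domain/ad.example.com]
id_provider = ad
access_provider = ad
# Resolve objects only from the joined domain and one selected trusted domain in the same forest
ad_enabled_domains = ad.example.com, subdomain.ad.example.com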
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/sssd-ad
|
Appendix A. Revision History
|
Appendix A. Revision History Revision History Revision 7-7 Mon Aug 5 2019 Vladimir Slavik Release for Red Hat Enterprise Linux 7.7 GA. Revision 7-6 Tue Oct 30 2018 Vladimir Slavik Release for Red Hat Enterprise Linux 7.6 GA. Revision 7-5.1 Mon Jun 18 2018 Radek Biba Use an id for the stap test screen. Revision 7-5 Tue Jan 09 2018 Vladimir Slavik Release for Red Hat Enterprise Linux 7.5 Beta. Revision 7-4 Wed Jul 26 2017 Vladimir Slavik Release for Red Hat Enterprise Linux 7.4. Revision 7-3.9 Tue May 16 2017 Robert Kratky Build for 7.4 Beta release. Revision 1-8 Wed Oct 19 2016 Robert Kratky Release for Red Hat Enterprise Linux 7.3. Revision 1-6 Wed Jan 20 2016 Robert Kratky Async release with many fixes. Revision 1-5 Thu Nov 11 2015 Robert Kratky Release for Red Hat Enterprise Linux 7.2. Revision 0-3 Fri Dec 6 2013 Jacquelynn East Release for Red Hat Enterprise Linux 7.0.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/appe-systemtap_beginners_guide-revision_history
|
5.2.5. Removing Physical Volumes
|
5.2.5. Removing Physical Volumes If a device is no longer required for use by LVM, you can remove the LVM label with the pvremove command. Executing the pvremove command zeroes the LVM metadata on an empty physical volume. If the physical volume you want to remove is currently part of a volume group, you must remove it from the volume group with the vgreduce command, as described in Section 5.3.7, "Removing Physical Volumes from a Volume Group" .
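Taken together, the two steps look like the following sketch; the volume group and device names are examples.

# Remove the physical volume from its volume group first
vgreduce myvg /dev/sdb1

# Then wipe the LVM label from the now-unused device
pvremove /dev/sdb1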
|
[
"pvremove /dev/ram15 Labels on physical volume \"/dev/ram15\" successfully wiped"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/PV_remove
|
Chapter 15. Azure Storage Blob Sink
|
Chapter 15. Azure Storage Blob Sink Upload data to Azure Storage Blob. Important The Azure Storage Blob Sink Kamelet is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Kamelet expects the following headers to be set: file / ce-file : as the file name to upload If the header won't be set the exchange ID will be used as file name. 15.1. Configuration Options The following table summarizes the configuration options available for the azure-storage-blob-sink Kamelet: Property Name Description Type Default Example accessKey * Access Key The Azure Storage Blob access Key. string accountName * Account Name The Azure Storage Blob account name. string containerName * Container Name The Azure Storage Blob container name. string credentialType Credential Type Determines the credential strategy to adopt. Possible values are SHARED_ACCOUNT_KEY, SHARED_KEY_CREDENTIAL and AZURE_IDENTITY string "SHARED_ACCOUNT_KEY" operation Operation Name The operation to perform. string "uploadBlockBlob" Note Fields marked with an asterisk (*) are mandatory. 15.2. Dependencies At runtime, the azure-storage-blob-sink Kamelet relies upon the presence of the following dependencies: camel:azure-storage-blob camel:kamelet 15.3. Usage This section describes how you can use the azure-storage-blob-sink . 15.3.1. Knative Sink You can use the azure-storage-blob-sink Kamelet as a Knative sink by binding it to a Knative object. azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 15.3.1.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.3.2. Kafka Sink You can use the azure-storage-blob-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
azure-storage-blob-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: "The Access Key" accountName: "The Account Name" containerName: "The Container Name" 15.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 15.3.2.2. Procedure for using the cluster CLI Save the azure-storage-blob-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f azure-storage-blob-sink-binding.yaml 15.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p "sink.accessKey=The Access Key" -p "sink.accountName=The Account Name" -p "sink.containerName=The Container Name" This command creates the KameletBinding in the current namespace on the cluster. 15.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/azure-storage-blob-sink.kamelet.yaml
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"",
"apply -f azure-storage-blob-sink-binding.yaml",
"kamel bind channel:mychannel azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: azure-storage-blob-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: azure-storage-blob-sink properties: accessKey: \"The Access Key\" accountName: \"The Account Name\" containerName: \"The Container Name\"",
"apply -f azure-storage-blob-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic azure-storage-blob-sink -p \"sink.accessKey=The Access Key\" -p \"sink.accountName=The Account Name\" -p \"sink.containerName=The Container Name\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/azure-storage-blob-sink
|
Chapter 3. Configuring Red Hat Quay
|
Chapter 3. Configuring Red Hat Quay Before running the Red Hat Quay service as a container, you need to use that same Quay container to create the configuration file ( config.yaml ) needed to deploy Red Hat Quay. To do that, you pass a config argument and a password (replace my-secret-password here) to the Quay container. Later, you use that password to log into the configuration tool as the user quayconfig . Here's an example of how to do that: Start quay in setup mode : On the first quay node, run the following: Open browser : When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example http://myquay.example.com:8080 ). You are prompted for a username and password. Log in as quayconfig : When prompted, enter the quayconfig username and password (the one from the podman run command line). Fill in the required fields : When you start the config tool without mounting an existing configuration bundle, you will be booted into an initial setup session. In a setup session, default values will be filled automatically. The following steps will walk through how to fill out the remaining required fields. Identify the database : For the initial setup, you must include the following information about the type and location of the database to be used by Red Hat Quay: Database Type : Choose MySQL or PostgreSQL. MySQL will be used in the basic example; PostgreSQL is used with the high availability Red Hat Quay on OpenShift examples. Database Server : Identify the IP address or hostname of the database, along with the port number if it is different from 3306. Username : Identify a user with full access to the database. Password : Enter the password you assigned to the selected user. Database Name : Enter the database name you assigned when you started the database server. SSL Certificate : For production environments, you should provide an SSL certificate to connect to the database. The following figure shows an example of the screen for identifying the database used by Red Hat Quay: Identify the Redis hostname, Server Configuration and add other desired settings : Other setting you can add to complete the setup are as follows. More settings for high availability Red Hat Quay deployment that for the basic deployment: For the basic, test configuration, identifying the Redis Hostname should be all you need to do. However, you can add other features, such as Clair Scanning and Repository Mirroring, as described at the end of this procedure. For the high availability and OpenShift configurations, more settings are needed (as noted below) to allow for shared storage, secure communications between systems, and other features. Here are the settings you need to consider: Custom SSL Certificates : Upload custom or self-signed SSL certificates for use by Red Hat Quay. See Using SSL to protect connections to Red Hat Quay for details. Recommended for high availability. Important Using SSL certificates is recommended for both basic and high availability deployments. If you decide to not use SSL, you must configure your container clients to use your new Red Hat Quay setup as an insecure registry as described in Test an Insecure Registry . Basic Configuration : Upload a company logo to rebrand your Red Hat Quay registry. Server Configuration : Hostname or IP address to reach the Red Hat Quay service, along with TLS indication (recommended for production installations). 
The Server Hostname is required for all Red Hat Quay deployments. TLS termination can be done in two different ways: On the instance itself, with all TLS traffic governed by the nginx server in the Quay container (recommended). On the load balancer. This is not recommended. Access to Red Hat Quay could be lost if the TLS setup is not done correctly on the load balancer. Data Consistency Settings : Select to relax logging consistency guarantees to improve performance and availability. Time Machine : Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times. redis : Identify the hostname or IP address (and optional password) to connect to the redis service used by Red Hat Quay. Repository Mirroring : Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Red Hat Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure. Registry Storage : Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Red Hat Quay high availability storage. On OpenShift, the example uses Amazon S3 storage. Action Log Storage Configuration : Action logs are stored in the Red Hat Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage . Action Log Rotation and Archiving : Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area. Security Scanner : Enable security scanning by selecting a security scanner endpoint and authentication key. To setup Clair to do image scanning, refer to Clair Setup and Configuring Clair . Recommended for high availability. Application Registry : Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification ). rkt Conversion : Allow rkt fetch to be used to fetch images from Red Hat Quay registry. Public and private GPG2 keys are needed. This field is deprecated. E-mail : Enable e-mail to use for notifications and user password resets. Internal Authentication : Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token. External Authorization (OAuth) : Enable to allow GitHub or GitHub Enterprise to authenticate to the registry. Google Authentication : Enable to allow Google to authenticate to the registry. Access Settings : Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion). 
Registry Protocol Settings : Leave the Restrict V1 Push Support checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled. Dockerfile Build Support : Enable to allow users to submit Dockerfiles to be built and pushed to Red Hat Quay. This is not recommended for multitenant environments. Validate the changes : Select Validate Configuration Changes . If validation is successful, you will be presented with the following Download Configuration modal: Download configuration : Select the Download Configuration button and save the tarball ( quay-config.tar.gz ) to a local directory to use later to start Red Hat Quay. At this point, you can shut down the Red Hat Quay configuration tool and close your browser. Next, copy the tarball file to the system on which you want to install your first Red Hat Quay node. For a basic install, you might just be running Red Hat Quay on the same system.
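As a rough sketch of that next step, the configuration bundle is typically extracted on the Quay node and mounted into the registry container. The directory layout, published ports, and the /conf/stack mount point shown here are assumptions to verify against the deployment guide for your version, not authoritative values.

# Extract the configuration bundle on the Quay node
mkdir -p /mnt/quay/config
tar xzf quay-config.tar.gz -C /mnt/quay/config

# Start the registry container with the configuration directory mounted
sudo podman run --detach --restart=always --name quay -p 8080:8080 -p 8443:8443 -v /mnt/quay/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.9.10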
|
[
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.9.10 config my-secret-password"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_-_high_availability/configuring_red_hat_quay
|
Chapter 54. PostgreSQL Sink
|
Chapter 54. PostgreSQL Sink Send data to a PostgreSQL Database. This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' The Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 54.1. Configuration Options The following table summarizes the configuration options available for the postgresql-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The Database Name we are pointing string password * Password The password to use for accessing a secured PostgreSQL Database string query * Query The Query to execute against the PostgreSQL Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured PostgreSQL Database string serverPort Server Port Server Port for the data source string 5432 Note Fields marked with an asterisk (*) are mandatory. 54.2. Dependencies At runtime, the postgresql-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:org.postgresql:postgresql mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001 54.3. Usage This section describes how you can use the postgresql-sink . 54.3.1. Knative Sink You can use the postgresql-sink Kamelet as a Knative sink by binding it to a Knative object. postgresql-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: postgresql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: postgresql-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 54.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 54.3.1.2. Procedure for using the cluster CLI Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f postgresql-sink-binding.yaml 54.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 54.3.2. Kafka Sink You can use the postgresql-sink Kamelet as a Kafka sink by binding it to a Kafka topic. postgresql-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: postgresql-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: postgresql-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 54.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 54.3.2.2. Procedure for using the cluster CLI Save the postgresql-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f postgresql-sink-binding.yaml 54.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 54.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/postgresql-sink.kamelet.yaml
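For reference, a table matching the example query used throughout this chapter could be created as follows. The column types are an assumption; the Kamelet only requires that the column names match the keys of the incoming JSON body.

CREATE TABLE accounts (
    username VARCHAR(255) NOT NULL,
    city     VARCHAR(255)
);

-- A message body of '{ "username":"oscerd", "city":"Rome"}' then results in:
-- INSERT INTO accounts (username,city) VALUES ('oscerd','Rome');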
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: postgresql-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: postgresql-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f postgresql-sink-binding.yaml",
"kamel bind channel:mychannel postgresql-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: postgresql-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: postgresql-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"",
"apply -f postgresql-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic postgresql-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/postgres-sql-sink
|
Chapter 7. Health checks for multi-site deployments
|
Chapter 7. Health checks for multi-site deployments When running the Multi-site deployments in a Kubernetes environment, you should automate checks to see if everything is up and running as expected. This page provides an overview of URLs, Kubernetes resources, and Healthcheck endpoints available to verify a multi-site setup of Red Hat build of Keycloak. 7.1. Overview A proactive monitoring strategy aims to detect and alert about issues before they impact users. This strategy is the key for a highly resilient and highly available Red Hat build of Keycloak application. Health checks across various architectural components (such as application health, load balancing, caching, and overall system status) are critical for: Ensuring high availability Verifying that all sites and the load balancer are operational is a key to ensure that a system can handle requests even if one site goes down. Maintaining performance Checking the health and distribution of the Data Grid cache ensures that Red Hat build of Keycloak can maintain optimal performance by efficiently handling sessions and other temporary data. Operational resilience By continuously monitoring the health of both Red Hat build of Keycloak and its dependencies within the Kubernetes environment, the system can quickly identify and possibly auto-remediate issues, reducing downtime. 7.2. Prerequisites Kubectl CLI is installed and configured . Install jq if it is not already installed on your operating system. 7.3. Specific health checks 7.3.1. Red Hat build of Keycloak load balancer and sites Verifies the health of the Red Hat build of Keycloak application through its load balancer and both primary and backup sites. This ensures that Red Hat build of Keycloak is accessible and that the load balancing mechanism is functioning correctly across different geographical or network locations. This command returns the health status of the Red Hat build of Keycloak application's connection to its configured database, thus confirming the reliability of database connections. This command is available only on the management port and not from the external URL. In a Kubernetes setup, the sub-status health/ready is checked periodically to make the Pod as ready. curl -s https://keycloak:managementport/health This command verifies the lb-check endpoint of the load balancer and ensures the Red Hat build of Keycloak application cluster is up and running. curl -s https://keycloak-load-balancer-url/lb-check These commands will return the running status of the Site A and Site B of the Red Hat build of Keycloak in a multi-site setup. curl -s https://keycloak_site_a_url/lb-check curl -s https://keycloak_site_b_url/lb-check 7.3.2. Data Grid Cache health Check the health of the default cache manager and individual caches in an external Data Grid cluster. This check is vital for Red Hat build of Keycloak performance and reliability, as Data Grid is often used for distributed caching and session clustering in Red Hat build of Keycloak deployments. This command returns the overall health of the Data Grid cache manager, which is useful as the Admin user does not need to provide user credentials to get the health status. curl -s https://infinispan_rest_url/rest/v2/cache-managers/default/health/status In contrast to the preceding health checks, the following health checks require the Admin user to provide the Data Grid user credentials as part of the request to peek into the overall health of the external Data Grid cluster caches. 
curl -u <infinispan_user>:<infinispan_pwd> -s https://infinispan_rest_url/rest/v2/cache-managers/default/health \ | jq 'if .cluster_health.health_status == "HEALTHY" and (all(.cache_health[].status; . == "HEALTHY")) then "HEALTHY" else "UNHEALTHY" end' The jq filter is a convenience to compute the overall health based on the individual cache health. You can also choose to run the above command without the jq filter to see the full details. 7.3.3. Data Grid Cluster distribution Assesses the distribution health of the Data Grid cluster, ensuring that the cluster's nodes are correctly distributing data. This step is essential for the scalability and fault tolerance of the caching layer. You can modify the expectedCount 3 argument to match the total nodes in the cluster and validate if they are healthy or not. curl <infinispan_user>:<infinispan_pwd> -s https://infinispan_rest_url/rest/v2/cluster\?action\=distribution \ | jq --argjson expectedCount 3 'if map(select(.node_addresses | length > 0)) | length == USDexpectedCount then "HEALTHY" else "UNHEALTHY" end' 7.3.4. Overall, Data Grid system health Uses the oc CLI tool to query the health status of Data Grid clusters and the Red Hat build of Keycloak service in the specified namespace. This comprehensive check ensures that all components of the Red Hat build of Keycloak deployment are operational and correctly configured within the Kubernetes environment. oc get infinispan -n <NAMESPACE> -o json \ | jq '.items[].status.conditions' \ | jq 'map({(.type): .status})' \ | jq 'reduce .[] as USDitem ([]; . + [keys[] | select(USDitem[.] != "True")]) | if length == 0 then "HEALTHY" else "UNHEALTHY: " + (join(", ")) end' 7.3.5. Red Hat build of Keycloak readiness in Kubernetes Specifically, checks for the readiness and rolling update conditions of Red Hat build of Keycloak deployments in Kubernetes, ensuring that the Red Hat build of Keycloak instances are fully operational and not undergoing updates that could impact availability. oc wait --for=condition=Ready --timeout=10s keycloaks.k8s.keycloak.org/keycloak -n <NAMESPACE> oc wait --for=condition=RollingUpdate=False --timeout=10s keycloaks.k8s.keycloak.org/keycloak -n <NAMESPACE>
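The individual checks above can be strung together into a small script for periodic execution, for example from a cron job or a Kubernetes CronJob. The sketch below only reuses the endpoints documented in this chapter; the URLs are placeholders for your environment.

#!/bin/bash
set -euo pipefail

# Load balancer and per-site checks
for url in https://keycloak-load-balancer-url/lb-check https://keycloak_site_a_url/lb-check https://keycloak_site_b_url/lb-check; do
  if curl -fs -o /dev/null "$url"; then echo "HEALTHY   $url"; else echo "UNHEALTHY $url"; fi
done

# Data Grid cache manager health (no credentials required for the status endpoint)
curl -fs https://infinispan_rest_url/rest/v2/cache-managers/default/health/status || echo "UNHEALTHY Data Grid"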
|
[
"curl -s https://keycloak:managementport/health",
"curl -s https://keycloak-load-balancer-url/lb-check",
"curl -s https://keycloak_site_a_url/lb-check curl -s https://keycloak_site_b_url/lb-check",
"curl -s https://infinispan_rest_url/rest/v2/cache-managers/default/health/status",
"curl -u <infinispan_user>:<infinispan_pwd> -s https://infinispan_rest_url/rest/v2/cache-managers/default/health | jq 'if .cluster_health.health_status == \"HEALTHY\" and (all(.cache_health[].status; . == \"HEALTHY\")) then \"HEALTHY\" else \"UNHEALTHY\" end'",
"curl <infinispan_user>:<infinispan_pwd> -s https://infinispan_rest_url/rest/v2/cluster\\?action\\=distribution | jq --argjson expectedCount 3 'if map(select(.node_addresses | length > 0)) | length == USDexpectedCount then \"HEALTHY\" else \"UNHEALTHY\" end'",
"get infinispan -n <NAMESPACE> -o json | jq '.items[].status.conditions' | jq 'map({(.type): .status})' | jq 'reduce .[] as USDitem ([]; . + [keys[] | select(USDitem[.] != \"True\")]) | if length == 0 then \"HEALTHY\" else \"UNHEALTHY: \" + (join(\", \")) end'",
"wait --for=condition=Ready --timeout=10s keycloaks.k8s.keycloak.org/keycloak -n <NAMESPACE> wait --for=condition=RollingUpdate=False --timeout=10s keycloaks.k8s.keycloak.org/keycloak -n <NAMESPACE>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/high_availability_guide/health-checks-multi-site-
|
Chapter 4. Advisories related to this release
|
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHBA-2025:0416 RHBA-2025:0417 RHBA-2025:0418 RHBA-2025:0419 RHBA-2025:0420 Revised on 2025-01-30 11:27:05 UTC
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.442/openjdk8-442-advisory_openjdk
|
Chapter 2. Configuring routed spine-leaf in the undercloud
|
Chapter 2. Configuring routed spine-leaf in the undercloud This section describes a use case about how to configure the undercloud to accommodate routed spine-leaf with composable networks. 2.1. Configuring the spine leaf provisioning networks To configure the provisioning networks for your spine leaf infrastructure, edit the undercloud.conf file and set the relevant parameters included in the following procedure. Procedure Log in to the undercloud as the stack user. If you do not already have an undercloud.conf file, copy the sample template file: Edit the undercloud.conf file. Set the following values in the [DEFAULT] section: Set local_ip to the undercloud IP on leaf0 : Set undercloud_public_host to the externally facing IP address of the undercloud: Set undercloud_admin_host to the administration IP address of the undercloud. This IP address is usually on leaf0: Set local_interface to the interface to bridge for the local network: Set enable_routed_networks to true : Define your list of subnets using the subnets parameter. Define one subnet for each L2 segment in the routed spine and leaf: Specify the subnet associated with the physical L2 segment local to the undercloud using the local_subnet parameter: Set the value of undercloud_nameservers . Tip You can find the current IP addresses of the DNS servers that are used for the undercloud nameserver by looking in /etc/resolv.conf. Create a new section for each subnet that you define in the subnets parameter: Save the undercloud.conf file. Run the undercloud installation command: This configuration creates three subnets on the provisioning network or control plane. The overcloud uses each network to provision systems within each respective leaf. To ensure proper relay of DHCP requests to the undercloud, you might need to configure a DHCP relay. 2.2. Configuring a DHCP relay You run the DHCP relay service on a switch, router, or server that is connected to the remote network segment you want to forward the requests from. Note Do not run the DHCP relay service on the undercloud. The undercloud uses two DHCP servers on the provisioning network: An introspection DHCP server. A provisioning DHCP server. You must configure the DHCP relay to forward DHCP requests to both DHCP servers on the undercloud. You can use UDP broadcast with devices that support it to relay DHCP requests to the L2 network segment where the undercloud provisioning network is connected. Alternatively, you can use UDP unicast, which relays DHCP requests to specific IP addresses. Note Configuration of DHCP relay on specific device types is beyond the scope of this document. As a reference, this document provides a DHCP relay configuration example using the implementation in ISC DHCP software. For more information, see manual page dhcrelay(8). Important DHCP option 79 is required for some relays, particularly relays that serve DHCPv6 addresses, and relays that do not pass on the originating MAC address. For more information, see RFC6939 . Broadcast DHCP relay This method relays DHCP requests using UDP broadcast traffic onto the L2 network segment where the DHCP server or servers reside. All devices on the network segment receive the broadcast traffic. When using UDP broadcast, both DHCP servers on the undercloud receive the relayed DHCP request. Depending on the implementation, you can configure this by specifying either the interface or IP network address: Interface Specify an interface that is connected to the L2 network segment where the DHCP requests are relayed. 
IP network address Specify the network address of the IP network where the DHCP requests are relayed. Unicast DHCP relay This method relays DHCP requests using UDP unicast traffic to specific DHCP servers. When you use UDP unicast, you must configure the device that provides the DHCP relay to relay DHCP requests to both the IP address that is assigned to the interface used for introspection on the undercloud and the IP address of the network namespace that the OpenStack Networking (neutron) service creates to host the DHCP service for the ctlplane network. The interface used for introspection is the one defined as inspection_interface in the undercloud.conf file. If you have not set this parameter, the default interface for the undercloud is br-ctlplane . Note It is common to use the br-ctlplane interface for introspection. The IP address that you define as the local_ip in the undercloud.conf file is on the br-ctlplane interface. The IP address allocated to the Neutron DHCP namespace is the first address available in the IP range that you configure for the local_subnet in the undercloud.conf file. The first address in the IP range is the one that you define as dhcp_start in the configuration. For example, 192.168.10.10 is the IP address if you use the following configuration: Warning The IP address for the DHCP namespace is automatically allocated. In most cases, this address is the first address in the IP range. To verify that this is the case, run the following commands on the undercloud: Example dhcrelay configuration In the following examples, the dhcrelay command in the dhcp package uses the following configuration: Interfaces to relay incoming DHCP request: eth1 , eth2 , and eth3 . Interface the undercloud DHCP servers on the network segment are connected to: eth0 . The DHCP server used for introspection is listening on IP address: 192.168.10.1 . The DHCP server used for provisioning is listening on IP address 192.168.10.10 . This results in the following dhcrelay command: dhcrelay version 4.2.x: dhcrelay version 4.3.x and later: Example Cisco IOS routing switch configuration This example uses the following Cisco IOS configuration to perform the following tasks: Configure a VLAN to use for the provisioning network. Add the IP address of the leaf. Forward UDP and BOOTP requests to the introspection DHCP server that listens on IP address: 192.168.10.1 . Forward UDP and BOOTP requests to the provisioning DHCP server that listens on IP address 192.168.10.10 . Now that you have configured the provisioning network, you can configure the remaining overcloud leaf networks. 2.3. Creating flavors and tagging nodes for leaf networks Each role in each leaf network requires a flavor and role assignment so that you can tag nodes into their respective leaf. Complete the following steps to create and assign each flavor to a role. Procedure Source the stackrc file: Create flavors for each custom role: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Retrieve a list of your nodes to identify their UUIDs: Tag each bare metal node to its leaf network and role by using a custom resource class: Replace <node> with the ID of the bare metal node. 
For example, enter the following command to tag a node with UUID 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13 to the Compute role on Leaf2: Associate each leaf network role flavor with the custom resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal Provisioning service node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. In the node-info.yaml file, specify the flavor that you want to use for each custom leaf role, and the number of nodes to allocate for each custom leaf role. For example, the following configuration specifies the flavor to use, and the number of nodes to allocate for the custom leaf roles compute_leaf0 , compute_leaf1 , compute_leaf2 , ceph-storage_leaf0 , ceph-storage_leaf1 , and ceph-storage_leaf2 : 2.4. Mapping bare metal node ports to control plane network segments To enable deployment on a L3 routed network, you must configure the physical_network field on the bare metal ports. Each bare metal port is associated with a bare metal node in the OpenStack Bare Metal (ironic) service. The physical network names are the names that you include in the subnets option in the undercloud configuration. Note The physical network name of the subnet specified as local_subnet in the undercloud.conf file is always named ctlplane . Procedure Source the stackrc file: Check the bare metal nodes: Ensure that the bare metal nodes are either in enroll or manageable state. If the bare metal node is not in one of these states, the command that sets the physical_network property on the baremetal port fails. To set all nodes to manageable state, run the following command: Check which baremetal ports are associated with which baremetal node: Set the physical-network parameter for the ports. In the example below, three subnets are defined in the configuration: leaf0 , leaf1 , and leaf2 . The local_subnet is leaf0 . Because the physical network for the local_subnet is always ctlplane , the baremetal port connected to leaf0 uses ctlplane. The remaining ports use the other leaf names: Introspect the nodes before you deploy the overcloud. Include the --all-manageable and --provide options to set the nodes as available for deployment: 2.5. Adding a new leaf to a spine-leaf provisioning network When increasing network capacity which can include adding new physical sites, you might need to add a new leaf and a corresponding subnet to your Red Hat OpenStack Platform spine-leaf provisioning network. When provisioning a leaf on the overcloud, the corresponding undercloud leaf is used. Prerequisites Your RHOSP deployment uses a spine-leaf network topology. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: In the /home/stack/undercloud.conf file, do the following: Locate the subnets parameter, and add a new subnet for the leaf that you are adding. A subnet represents an L2 segment in the routed spine and leaf: Example In this example, a new subnet ( leaf3 ) is added for the new leaf ( leaf3 ): Create a section for the subnet that you added. Example In this example, the section [leaf3] is added for the new subnet ( leaf3 ): Save the undercloud.conf file. Reinstall your undercloud: Additional resources Adding a new leaf to a spine-leaf deployment
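The per-port physical_network assignments described in section 2.4 can also be scripted. The following is a minimal sketch rather than part of the official procedure: the node names leaf1-node0 and leaf1-node1 and the node-to-leaf mapping are assumptions for illustration, and the sketch reuses only the openstack baremetal port commands shown in this chapter.
# Assumed mapping for illustration: these two nodes are cabled to leaf1.
for node in leaf1-node0 leaf1-node1; do
  # Look up the port UUID for the node, then tag the port with the leaf1 physical network.
  port=$(openstack baremetal port list --node "$node" -f value -c UUID)
  openstack baremetal port set --physical-network leaf1 "$port"
done
Ports on nodes connected to leaf0 are set to ctlplane instead, because the physical network for the local_subnet is always ctlplane.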
|
[
"[stack@director ~]USD cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf",
"local_ip = 192.168.10.1/24",
"undercloud_public_host = 10.1.1.1",
"undercloud_admin_host = 192.168.10.2",
"local_interface = eth1",
"enable_routed_networks = true",
"subnets = leaf0,leaf1,leaf2",
"local_subnet = leaf0",
"undercloud_nameservers = 10.11.5.19,10.11.5.20",
"[leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False [leaf1] cidr = 192.168.11.0/24 dhcp_start = 192.168.11.10 dhcp_end = 192.168.11.90 inspection_iprange = 192.168.11.100,192.168.11.190 gateway = 192.168.11.1 masquerade = False [leaf2] cidr = 192.168.12.0/24 dhcp_start = 192.168.12.10 dhcp_end = 192.168.12.90 inspection_iprange = 192.168.12.100,192.168.12.190 gateway = 192.168.12.1 masquerade = False",
"[stack@director ~]USD openstack undercloud install",
"[DEFAULT] local_subnet = leaf0 subnets = leaf0,leaf1,leaf2 [leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False",
"openstack port list --device-owner network:dhcp -c \"Fixed IP Addresses\" +----------------------------------------------------------------------------+ | Fixed IP Addresses | +----------------------------------------------------------------------------+ | ip_address='192.168.10.10', subnet_id='7526fbe3-f52a-4b39-a828-ec59f4ed12b2' | +----------------------------------------------------------------------------+ openstack subnet show 7526fbe3-f52a-4b39-a828-ec59f4ed12b2 -c name +-------+--------+ | Field | Value | +-------+--------+ | name | leaf0 | +-------+--------+",
"sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 -i eth0 -i eth1 -i eth2 -i eth3",
"sudo dhcrelay -d --no-pid 192.168.10.10 192.168.10.1 -iu eth0 -id eth1 -id eth2 -id eth3",
"interface vlan 2 ip address 192.168.24.254 255.255.255.0 ip helper-address 192.168.10.1 ip helper-address 192.168.10.10 !",
"[stack@director ~]USD source ~/stackrc",
"ROLES=\"control compute_leaf0 compute_leaf1 compute_leaf2 ceph-storage_leaf0 ceph-storage_leaf1 ceph-storage_leaf2\" for ROLE in USDROLES; do openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> USDROLE ; done for ROLE in USDROLES; do openstack flavor set --property \"cpu_arch\"=\"x86_64\" --property \"capabilities:boot_option\"=\"local\" --property resources:DISK_GB='0' --property resources:MEMORY_MB='0' --property resources:VCPU='0' USDROLE ; done",
"(undercloud)USD openstack baremetal node list",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.LEAF-ROLE <node>",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.COMPUTE-LEAF2 58c3d07e-24f2-48a7-bbb6-6843f0e8ee13",
"(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_LEAF_ROLE=1 <custom_role>",
"parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeLeaf0Flavor: compute_leaf0 OvercloudComputeLeaf1Flavor: compute_leaf1 OvercloudComputeLeaf2Flavor: compute_leaf2 OvercloudCephStorageLeaf0Flavor: ceph-storage_leaf0 OvercloudCephStorageLeaf1Flavor: ceph-storage_leaf1 OvercloudCephStorageLeaf2Flavor: ceph-storage_leaf2 ControllerLeaf0Count: 3 ComputeLeaf0Count: 3 ComputeLeaf1Count: 3 ComputeLeaf2Count: 3 CephStorageLeaf0Count: 3 CephStorageLeaf1Count: 3 CephStorageLeaf2Count: 3",
"source ~/stackrc",
"openstack baremetal node list",
"for node in USD(openstack baremetal node list -f value -c Name); do openstack baremetal node manage USDnode --wait; done",
"openstack baremetal port list --node <node-uuid>",
"openstack baremetal port set --physical-network ctlplane <port-uuid> openstack baremetal port set --physical-network leaf1 <port-uuid> openstack baremetal port set --physical-network leaf2 <port-uuid>",
"openstack overcloud node introspect --all-manageable --provide",
"source ~/stackrc",
"subnets = leaf0,leaf1,leaf2,leaf3",
"[leaf0] cidr = 192.168.10.0/24 dhcp_start = 192.168.10.10 dhcp_end = 192.168.10.90 inspection_iprange = 192.168.10.100,192.168.10.190 gateway = 192.168.10.1 masquerade = False [leaf1] cidr = 192.168.11.0/24 dhcp_start = 192.168.11.10 dhcp_end = 192.168.11.90 inspection_iprange = 192.168.11.100,192.168.11.190 gateway = 192.168.11.1 masquerade = False [leaf2] cidr = 192.168.12.0/24 dhcp_start = 192.168.12.10 dhcp_end = 192.168.12.90 inspection_iprange = 192.168.12.100,192.168.12.190 gateway = 192.168.12.1 masquerade = False [leaf3] cidr = 192.168.13.0/24 dhcp_start = 192.168.13.10 dhcp_end = 192.168.13.90 inspection_iprange = 192.168.13.100,192.168.13.190 gateway = 192.168.13.1 masquerade = False",
"openstack undercloud install"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/spine_leaf_networking/assembly_configuring-routed-spine-leaf-in-the-undercloud
|
8.46. fprintd
|
8.46. fprintd 8.46.1. RHBA-2013:1738 - fprintd bug fix update Updated fprintd packages that fix one bug are now available for Red Hat Enterprise Linux 6. The fprintd packages contain a D-Bus service to access fingerprint readers. Bug Fix BZ# 1003940 When the Pluggable Authentication Module (PAM) configuration includes the pam_fprintd module, PAM uses the glib2 functions where the dlclose() function is executed to unload the glib2 libraries. However, this method is not designed for multi-threaded applications. When a PAM operation was made, Directory Server on Red Hat Enterprise Linux 6 terminated unexpectedly during the shutdown phase because it attempted to unload the glib2 destructor, which had been previously unloaded by the fprintd service. This update applies a patch to fix this bug so that fprintd no longer unloads glib2 when pam_fprintd closes. As a result, the glib2 libraries are unloaded when Directory Server is closed and therefore the server shuts down gracefully. Users of fprintd are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/fprintd
|
Chapter 2. Selecting a cluster installation method and preparing it for users
|
Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and make sure that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Amazon Web Services (AWS) Microsoft Azure Google Cloud Platform (GCP) RHOSP RHV IBM Z and LinuxONE IBM Power VMware vSphere VMware Cloud (VMC) on AWS Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same datacenter or cloud hosting service. If you want to use OpenShift Container Platform but do not want to manage the cluster yourself, you have several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated or OpenShift Online . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud, or Google Cloud. For more information about managed services, see the OpenShift Products page. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. The operating system for the cluster machines is deployed as part of the installation process for OpenShift Container Platform. See Comparing OpenShift Container Platform 3 and OpenShift Container Platform 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to it. For more information about migrating, see OpenShift Migration Best Practices . Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer-provisioned infrastructure installations. 
In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to AWS , Azure , or GCP . These installation methods are the fastest way to deploy a production-capable OpenShift Container Platform cluster. If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for AWS , Azure , or GCP . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , or GCP can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for RHV , vSphere , and bare metal . If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , or GCP , you can use the provided templates to help you stand up all of the required components. Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP , RHV , IBM Z or LinuxONE , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install your cluster that has limited access to the Internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z or LinuxONE , IBM Power , vSphere , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , RHOSP , RHV , and vSphere . If you need to deploy your cluster to an AWS GovCloud region or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. You can also configure the cluster machines to use FIPS Validated / Modules in Process cryptographic libraries during installation. 
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster, but is recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster, and integrate your cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation options are supported for all platforms, as shown in the following tables. Table 2.1. Installer-provisioned infrastructure options AWS Azure GCP OpenStack RHV Bare metal vSphere VMC IBM Z IBM Power Default X X X X X X X Custom X X X X X X X Network customization X X X X X Restricted network X X X X X X Private clusters X X X Existing virtual private networks X X X Government regions X X Table 2.2. User-provisioned infrastructure options AWS Azure GCP OpenStack RHV Bare metal vSphere VMC IBM Z IBM Power Custom X X X X X X X X X X Network customization X X X Restricted network X X X X X X X Shared VPC hosted outside of cluster project X
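For the identity provider integration listed in section 2.2, an htpasswd-backed provider is a common starting point. The following is a minimal sketch under stated assumptions, not the only supported method: the file name users.htpasswd, the user name admin, the secret name htpass-secret, and the manifest file oauth-htpasswd.yaml are illustrative, and the last step assumes you have written an OAuth custom resource of type HTPasswd that references that secret, as described in the identity provider documentation.
# Create a local htpasswd file with one user (requires the htpasswd utility from httpd-tools).
htpasswd -c -B -b users.htpasswd admin <password>
# Store the file as a secret in the openshift-config namespace.
oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
# Apply your OAuth custom resource that references htpass-secret.
oc apply -f oauth-htpasswd.yaml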
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-preparing
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive
|
6.4. Red Hat Virtualization 4.4 SP1 General Availability (ovirt-4.5.0)
|
6.4. Red Hat Virtualization 4.4 SP1 General Availability (ovirt-4.5.0) 6.4.1. Bug Fix These bugs were fixed in this release of Red Hat Virtualization: BZ# 1648985 A user with user role permissions cannot take control of a VM from a superuser, close the superuser's console connection, and assign the VM to a user with user role permissions. BZ# 1687845 Notifications for hosts rely on the server time, instead of comparing the job's "end time" to the local browser time, to resolve the issue of multiple "Finish activating host" notifications. BZ# 1768969 During a self-hosted engine deployment, the TPGT value (target portal group tag) is used for the iSCSI login to resolve the issue of duplicate iSCSI sessions being created. BZ# 1810032 The default value of a vNIC network filter is documented in the REST API documentation. BZ# 1834542 The engine-setup process uses the yum proxy configuration to check for packages and RPMs. BZ# 1932149 The hosted-engine --deploy command checks the compatibility level of the cluster or data center and creates a storage domain in the appropriate format. BZ# 1944290 If a user tries to log in to the VM Portal or the Administration Portal with an expired password, a link directs the user to the "Change password" page. BZ# 1959186 , BZ# 1991240 When a user provisions VMs from templates in the VM Portal, the Manager selects a quota that the user has access to, so that the user is not restricted to the quota specified by the template. BZ# 1971622 The warning icons on the Virtual Machines tab of the host's details view are displayed correctly. BZ# 1971863 The engine-setup process ignores DNS queries with the deprecated type ANY . BZ# 1974741 Previously, a bug in the finalization mechanism left the disk locked in the database. In this release, the finalization mechanism works correctly and the disk remains unlocked in all scenarios. BZ# 1979441 Previously, a warning appeared if the CPU of a high performance VM was different from the cluster CPU. In this release, the warning is not displayed when CPU passthrough is configured. BZ# 1986726 When a VM is imported as an OVA, the selected allocation policy is followed. BZ# 1988496 THe vmconsole-proxy-helper certificate is renewed with the Manager certificate during the engine-setup process. BZ# 2000031 A non-responsive SPM host reboots once instead of multiple times. BZ# 2003996 Previously, a regular snapshot could not be deleted if a " run" snapshot existed because the " run" snapshot type was missing. In this release, the issue is resolved by not reporting " run" snapshots to clients. BZ# 2006745 Previously, when a template disk was copied to/from a Managed Block Storage domain, its storage domain ID was incorrect, the same image was saved repeatedly in the images and base disks database tables, and its ManagedBlockStorageDisk disk type was cast to DiskImage . In this release, copying a template disk to/from a Managed Block Storage domain works as expected. BZ# 2007384 The data type of the disk writeRate and readRate parameter values has been changed from integer to long to support higher values. BZ# 2010067 When a preallocated disk is downloaded, its image is saved as sparse instead of fully allocated. BZ# 2010203 The Log Collection Analysis tool handles line breaks correctly, resolving the issue of incorrect formatting in the "Virtual Machine(s) with unmanaged devices" table of the HTML report. BZ# 2010478 A VM behaves correctly, according to its resume policy, if the storage state changes during VM migration. 
BZ# 2011309 Previously, a self-hosted engine deployment failed when an OpenSCAP security profile was applied, resulting in the SSH key permissions being changed to 0640 , which is insecure. In this release, the permissions remain 0600 and the deployment succeeds. BZ# 2013928 Special characters in the Log Collection Analysis tool database are escaped, resolving the issue of incorrect formatting in the "vdc_options" table of the HTML report. BZ# 2016173 The LVM filter created by the vdsm-tool filters correctly for a multipath device instead of including SCSI devices. BZ# 2024202 Translation strings in the Administration Portal dialogs are correctly displayed in all languages. BZ# 2028481 SCSI reservation works for hot-plugged disks. BZ# 2040361 When multiple disks with VirtIO-SCSI interfaces are hot-plugged to a virtual machine configured for multiple IO threads, each disk is assigned a unique PCI address, resolving the issue of duplicate PCI addresses. BZ# 2040402 Commands that use the obsolete "log_days" option of the Log Collector tool have been removed. BZ# 2041544 When you select a host to upload, the host list no longer jumps back to the first host if you select a different host. BZ# 2048546 The Log Collector tool has been modified to use the sos report command in order to avoid warning messages caused by the sosreport command, which will be deprecated in the future. BZ# 2050108 The ovirt-ha-broker service runs successfully on a host with a DISA STIG profile. BZ# 2052557 When stateless VMs or VMs that were started in run-once mode are shut down, vGPU devices are properly released. BZ# 2064380 The VNC console password has been changed from 12 to 8 characters, in compliance with libvirt 8 requirements. BZ# 2066811 Self-hosted engine deployment succeeds on a host with a DISA STIG profile, which does not allow non-root users to run Ansible playbooks, when the postgres user is replaced by engine_psql.sh . BZ# 2075852 The correct version of the nodejs package is installed. 6.4.2. Enhancements This release of Red Hat Virtualization features the following enhancements: BZ# 977379 You can edit and manage iSCSI storage domain connections in the Administration Portal. For example, you can edit a logical domain to point to a different physical storage, which is useful if the underlying LUNs are replicated for backup purposes or if the physical storage address has changed. BZ# 1616158 The self-hosted engine installation checks that the IP address of the Manager is in the same subnet as the host running the self-hosted engine agent. BZ# 1624015 You can set a console type globally for all VMs with engine-config . BZ# 1667517 A logged-in user can set the default console type, full screen mode, smart card enablement, Ctrl+Alt+Del key mapping, and the SSH key in the VM Portal. BZ# 1745141 The SnowRidge Accelerator Interface Architecture (AIA) can be enabled by modifying the extra_cpu_flags custom property of a virtual machine ( movdiri , movdir64b ). BZ# 1781241 The ability to connect automatically to a VM in the VM Portal has been restored as a configurable option. BZ# 1849169 The VCPU_TO_PHYSICAL_CPU_RATIO parameter has been added to the Evenly Distributed scheduling policy to prevent over-utilization of physical CPUs on a host. The value of the parameter reflects the ratio between virtual and physical CPUs. BZ# 1878930 You can configure a threshold for the minimum number of available MAC addresses in a pool with engine-config . 
BZ# 1922977 Shared disks are included in the 'OVF_STORE' configuration, which enables VMs to share disks after a storage domain is moved to a new data center and the VMs are imported. BZ# 1925878 A link to the Administration Portal has been added to all Grafana dashboards. BZ# 1926625 You can enable HTTP Strict Transport Security after installing the Manager by following the instructions in How to enable HTTP Strict Transport Security (HSTS) on Apache HTTPD . BZ# 1944834 You can set a delay interval for shutting down your VM console session in the Administration Portal to avoid accidental disconnection. BZ# 1964208 You can create and download a VM screenshot in the Administration Portal. BZ# 1975720 You can create parallel migration connections. See Parallel migration connections for details. BZ# 1979797 A warning message is displayed if you try to remove a storage domain that contains a volume leased by a VM in a different storage domain. BZ# 1987121 You can specify vGPU driver parameters as a string, for example, enable_uvm=1 , for all the vGPUs of a VM by using the vGPU editing dialog. The vGPU editing dialog has been moved from Host devices to VM devices . BZ# 1990462 RSyslog can authenticate to Elasticsearch with a user name and password. BZ# 1991482 A link to the Monitoring Portal has been added to the Administration Portal dashboard. BZ# 1995455 You can use any number of CPU sockets, up to the number of maximum vCPUs, on cluster versions 4.6 and earlier, if the guest OS is compatible. BZ# 1998255 You can search and filter vNIC profiles by attributes. BZ# 1998866 Windows 11 is supported as a guest operating system. BZ# 1999698 The Apache HTTPD SSLProtocol configuration is managed by crypto-policies instead of being set by engine-setup . BZ# 2012830 You can now use the Logical Volume Management (LVM) devices file for managing storage devices instead of LVM filter, which can be complicated to set up and difficult to manage. Starting with RHEL 8.6, this will be the default for storage device management. BZ# 2002283 You can set the number of PCI Express ports for VMs with engine-config . BZ# 2020620 You can deploy a self-hosted engine on a host with a DISA STIG profile. BZ# 2021217 Windows 2022 is supported as a guest operating system. BZ# 2021545 DataCenter/Cluster compatibility level 4.7 is available for hosts with RHEL 8.6 or later. BZ# 2023786 If a VM is set with the custom property sap_agent=true , hosts that do not have the vdsm-hook-vhostmd package installed are filtered out by the scheduler when the VM is started. BZ# 2029830 You can select either the DISA STIG or the PCI-DSS security profile for the self-hosted engine VM during installation. BZ# 2030596 The Manager can run on a host with a PCI-DSS security profile. BZ# 2033185 Cluster level 4.7 supports the e1000e VM NIC type. Because the e1000 driver is deprecated by RHEL 8.0, users should switch to e1000e if possible. BZ# 2037121 The RHV Image Discrepancy tool displays data center and storage domain names in its output. BZ# 2040474 The Administration Portal provides better error messages and status and progress indicators during cluster upgrade. BZ# 2049782 You can set user-specific preferences in the Administration Portal. BZ# 2054756 A link to the Migration Toolkit for Virtualization documentation has been added to the login screen of the Administration Portal. BZ# 2058177 The nvme-cli package, used by RHEL 8 to manage storage devices, has been added to RHVH. 
BZ# 2066042 ansible-core package, required by cockpit-ovirt has been added to RHVH. BZ# 2070963 The rng-tools , rsyslog-gnutls , and usbguard packages have been added to rhvm-appliance to comply with DISA-STIG profile requirements. BZ# 2070980 The OVA package manifest has been added to the rhvm-appliance RPM. BZ# 2072881 You can restore a backup of an earlier RHV 4 version to a datacenter/cluster with the current version. 6.4.3. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat Virtualization. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1782056 IPSec for Open Virtual Network is available for hosts with ovirt-provider-ovn , ovn-2021 or later, and openvswitch2.15 or later. BZ# 1940824 You can upgrade OVN and OVS 2.11 to OVN 2021 and OVS 2.15. BZ# 2004852 You can enable VirtIO-SCSI and multiple queues, depending on the number of available vCPUs, when creating a VM with an Ansible playbook. BZ# 2015796 The current release can be deployed on a host with the RHEL 8.6 DISA STIG OpenSCAP profile. BZ# 2023250 The host installation and upgrade flows have been updated to enable the virt:rhel module during a new installation of the RHEL 8.6 host or upgrade from RHEL 8.5 or earlier. BZ# 2030226 RHVH can be deployed on a machine with the PCI-DSS security profile. BZ# 2052686 The current release requires ansible-core 2.12.0 or later. BZ# 2055136 The virt DNF module version is set to the RHEL version of the host during the upgrade procedure. BZ# 2056126 When an internal certificate is due to expire, the Manager creates a warning event 120 days in advance and an alert event 30 days in advance in the audit log. Custom certificates for HTTPS access to the Manager are not checked. 6.4.4. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 2016359 The GlusterFS storage type is deprecated because Red Hat Gluster Storage reaches end of life in 2024. 6.4.5. Removed Functionality BZ# 2052963 The systemtap package has been removed from RHVH. BZ# 2056937 The RHV appliance is no longer supported. You can update the Manager by running dnf update and engine-setup . BZ# 2077545 The ovirt-iso-uploader package has been removed from RHV.
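Because the RHV appliance is no longer used for Manager updates (BZ# 2056937), the Manager is updated with standard tooling. The following is a hedged sketch based only on the commands named in that note; it is not a substitute for the full documented update procedure, and repository configuration and backups beforehand are assumed.
# On the Red Hat Virtualization Manager machine, as root:
dnf update
engine-setup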
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/red_hat_virtualization_4_4_sp1_general_availability_ovirt_4_5_0
|
Chapter 5. Management of Ceph File System volumes, sub-volumes, and sub-volume groups
|
Chapter 5. Management of Ceph File System volumes, sub-volumes, and sub-volume groups As a storage administrator, you can use Red Hat's Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack's file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon ( ceph-mgr ) implements the ability to export Ceph File Systems (CephFS). The Ceph Manager volumes module implements the following file system export abstractions: CephFS volumes CephFS subvolume groups CephFS subvolumes This chapter describes how to work with: Ceph File System volumes Ceph File System subvolume groups Ceph File System subvolumes 5.1. Ceph File System volumes As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems. This section describes how to: Create a file system volume. List file system volume. Remove a file system volume. 5.1.1. Creating a file system volume Ceph Manager's orchestrator module creates a Meta Data Server (MDS) for the Ceph File System (CephFS). This section describes how to create CephFS volume. Note This creates the Ceph File System, along with the data and metadata pools. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS volume: Syntax Example 5.1.2. Listing file system volume This section describes the step to list the Ceph File system (CephFS) volumes. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure List the CephFS volume: Example 5.1.3. Removing a file system volume Ceph Manager's orchestrator module removes the Meta Data Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File system (CephFS) volume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure Remove the CephFS volume: Syntax Example 5.2. Ceph File System subvolumes As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes. You can also authorize Ceph client users for CephFS subvolumes. Additionally, you can also create, list and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File Systems directory trees. This section describes how to: Create a file system subvolume. List file system subvolume. Authorizing Ceph client users for File System subvolumes. Deauthorizing Ceph client users for File System subvolumes. Listing Ceph client users for File System subvolumes. Evicting Ceph client users from File System subvolumes. Resizing a file system subvolume. Fetch absolute path of a file system subvolume. Fetch metadata of a file system subvolume. Create snapshot of a file system subvolume. List snapshots of a file system subvolume. Fetching metadata of the snapshots of a file system subvolume. Remove a file system subvolume. Remove snapshot of a file system subvolume. 5.2.1. 
Creating a file system subvolume This section describes how to create Ceph File system (CephFS) subvolume. Note When creating a subvolume you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying`--namespace-isolated` option. By default a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory and no size limit. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume: Syntax Example The command succeeds even if the subvolume already exists. 5.2.2. Listing file system subvolume This section describes the step to list the Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure List the CephFS subvolume: Syntax Example 5.2.3. Authorizing Ceph client users for File System subvolumes Red Hat Ceph Storage cluster uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System (CephFS) subvolumes, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System is mounted. You can authorize the user to access the CephFS subvolumes using the authorize command. Prerequisites A working Red Hat Ceph Storage cluster with CephFS deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume created. Procedure Create a CephFS subvolume: Syntax Example The command succeeds even if the subvolume already exists. Authorize the Ceph client user,with either read or write access to CephFS subvolumes: Syntax The ACCESS_LEVEL can be either r or rw and AUTH_ID is the Ceph client user, which is a string. Example In this example, the 'client.guest' is authorized to access subvolume sub0 in the subvolume group subgroup0 . Additional Resources See the Ceph authentication configuration section in the Red Hat Ceph Storage Configuration Guide . See the Creating a file system volume section in the Red Hat Ceph Storage Ceph File System Guide . 5.2.4. Deauthorizing Ceph client users for File System subvolumes You can deauthorize the user to access the Ceph File System (CephFS) subvolumes using the deauthorize command. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume and subvolume created. Ceph client users authorized to access CephFS subvolumes. Procedure Deauthorize the Ceph client user's access to CephFS subvolumes: Syntax The AUTH_ID is the Ceph client user, which is a string. Example In this example, the 'client.guest' is deauthorized to access subvolume sub0 in the subvolume group subgroup0 . Additional Resources See the Authorizing Ceph client users for File System subvolumes section in the Red Hat Ceph Storage Ceph File System Guide . 5.2.5. 
Listing Ceph client users for File System subvolumes You can list the user's access to the Ceph File System (CephFS) subvolumes using the authorized_list command. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume and subvolume created. Ceph client users authorized to access CephFS subvolumes. Procedure List the Ceph client user's access to CephFS subvolumes: Syntax Example Additional Resources See the Authorizing Ceph client users for File System subvolumes section in the Red Hat Ceph Storage Ceph File System Guide . 5.2.6. Evicting Ceph client users from File System subvolumes You can evict the Ceph client user from the Ceph File System (CephFS) subvolumes using the evict command based on the _AUTH_ID and the subvolume mounted. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume and subvolume created. Ceph client users authorized to access CephFS subvolumes. Procedure Evict the Ceph client user from the CephFS subvolumes: Syntax The AUTH_ID is the Ceph client user, which is a string. Example In this example, the 'client.guest' is evicted from the subvolumegroup subgroup0 . Additional Resources See the Authorizing Ceph client users for File System subvolumes section in the Red Hat Ceph Storage Ceph File System Guide . 5.2.7. Resizing a file system subvolume This section describes the step to resize the Ceph File system (CephFS) subvolume. Note The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size . The --no_shrink flag prevents the subvolume to shrink below the current used size of the subvolume. The subvolume can be resized to an infinite size by passing inf or infinite as the new_size . Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Resize a CephFS subvolume: Syntax Example 5.2.8. Fetching absolute path of a file system subvolume This section shows how to fetch the absolute path of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the absolute path of the CephFS subvolume: Syntax Example 5.2.9. Fetching metadata of a file system subvolume This section shows how to fetch metadata of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the metadata of a CephFS subvolume: Syntax Example Example output The output format is a json and contains the following fields: atime : access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". mtime : modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". ctime : change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". uid : uid of subvolume path. gid : gid of subvolume path. mode : mode of subvolume path. mon_addrs : list of monitor addresses. bytes_pcent : quota used in percentage if quota is set, else displays "undefined". 
bytes_quota : quota size in bytes if quota is set, else displays "infinite". bytes_used : current used size of the subvolume in bytes. created_at : time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS". data_pool : data pool the subvolume belongs to. path : absolute path of a subvolume. type : subvolume type indicating whether it's clone or subvolume. pool_namespace : RADOS namespace of the subvolume. features : features supported by the subvolume, such as , "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention". state : current state of the subvolume, such as, "complete" or "snapshot-retained" 5.2.10. Creating snapshot of a file system subvolume This section shows how to create snapshots of Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the Ceph File System subvolume: Syntax Example 5.2.11. Cloning subvolumes from snapshots Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. To create or delete snapshots, in addition to read and write capability, clients require s flag on a directory path within the filesystem. Syntax In the following example, client.0 can create or delete snapshots in the bar directory of filesystem cephfs_a . Example Procedure Create a Ceph File System (CephFS) volume: Syntax Example This creates the CephFS file system, its data and metadata pools. Create a subvolume group. By default, the subvolume group is created with an octal file mode '755', and data pool layout of its parent directory. Syntax Example Create a subvolume. By default, a subvolume is created within the default subvolume group, and with an octal file mode '755', uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory and no size limit. Syntax Example Create a snapshot of a subvolume: Syntax Example Initiate a clone operation: Note By default, cloned subvolumes are created in the default group. If the source subvolume and the target clone are in the default group, run the following command: Syntax Example If the source subvolume is in the non-default group, then specify the source subvolume group in the following command: Syntax Example If the target clone is to a non-default group, then specify the target group in the following command: Syntax Example Check the status of the clone operation: Syntax Example Additional Resources See the Managing Ceph Users section in the Red Hat Ceph Storage Administration Guide . 5.2.12. Listing snapshots of a file system subvolume This section provides the step to list the snapshots of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. 
Snapshots of the subvolume. Procedure List the snapshots of a CephFS subvolume: Syntax Example 5.2.13. Fetching metadata of the snapshots of a file system subvolume This section provides the step to fetch the metadata of the snapshots of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with CephFS deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure Fetch the metadata of the snapshots of a CephFS subvolume: Syntax Example Example output The output format is json and contains the following fields: created_at : time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff". data_pool : data pool the snapshot belongs to. has_pending_clones : "yes" if snapshot clone is in progress otherwise "no". size : snapshot size in bytes. 5.2.14. Removing a file system subvolume This section describes the step to remove the Ceph File system (CephFS) subvolume. Note The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents. A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Remove a CephFS subvolume: Syntax Example To recreate a subvolume from a retained snapshot: Syntax * NEW_SUBVOLUME - can either be the same subvolume which was deleted earlier or clone it to a new subvolume. Example 5.2.15. Removing snapshot of a file system subvolume This section provides the step to remove snapshots of a Ceph File system (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume: Syntax Example 5.3. Ceph File System subvolume groups As a storage administrator, you can create, list, fetch absolute path, and remove Ceph File System (CephFS) subvolume groups. Additionally, you can also create, list and remove snapshots of these subvolume groups. CephFS subvolume groups are abstractions at a directory level which effects policies, for example, file layouts, across a set of subvolumes. This section describes how to: Create a file system subvolume group. List file system subvolume groups. Fetch absolute path of a file system subvolume group. Create snapshot of a file system subvolume group. List snapshots of a file system subvolume group. Remove snapshot of a file system subvolume group. Remove a file system subvolume group. 5.3.1. Creating a file system subvolume group This section describes how to create Ceph File system (CephFS) subvolume group. Note When creating a subvolume group you can specify its data pool layout, uid, gid, and file mode in octal numerals. 
By default, the subvolume group is created with an octal file mode '755', uid '0', gid '0' and data pool layout of its parent directory. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume group: Syntax Example The command succeeds even if the subvolume group already exists. 5.3.2. Listing file system subvolume groups This section describes the step to list the Ceph File system (CephFS) subvolume groups. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure List the CephFS subvolume groups: Syntax Example 5.3.3. Fetching absolute path of a file system subvolume group This section shows how to fetch the absolute path of a Ceph File system (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Fetch the absolute path of the CephFS subvolume group: Syntax Example 5.3.4. Creating snapshot of a file system subvolume group This section shows how to create snapshots of Ceph File system (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. CephFS subvolume group. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the CephFS subvolume group: Syntax Example The command implicitly snapshots all the subvolumes under the subvolume group. 5.3.5. Listing snapshots of a file system subvolume group This section provides the steps to list the snapshots of a Ceph File system (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Snapshots of the subvolume group. Procedure List the snapshots of a CephFS subvolume group: Syntax Example 5.3.6. Removing snapshot of a file system subvolume group This section provides the step to remove snapshots of a Ceph File system (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume group: Syntax Example 5.3.7. Removing a file system subvolume group This section shows how to remove the Ceph File system (CephFS) subvolume group. Note The removal of a subvolume group fails if it is not empty or non-existent. The --force flag allows the non-existent subvolume group to be removed. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. 
Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Remove the CephFS subvolume group: Syntax Example 5.4. Additional Resources See the Managing Ceph Users section in the Red Hat Ceph Storage Administration Guide .
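The Syntax and Example blocks referenced in the procedures above are collected in the command listing that follows. As a quick orientation, the following sketch strings together the subvolume group commands from this chapter into one lifecycle; it assumes a volume named cephfs, a group named subgroup0, and a snapshot named snap0, all taken from the examples above.
# create a subvolume group with the default mode, uid, gid, and data pool layout
ceph fs subvolumegroup create cephfs subgroup0
# list the groups on the volume and fetch the absolute path of the new group
ceph fs subvolumegroup ls cephfs
ceph fs subvolumegroup getpath cephfs subgroup0
# snapshot the group (this implicitly snapshots all subvolumes under it), then list and remove the snapshot
ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0
ceph fs subvolumegroup snapshot ls cephfs subgroup0
ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force
# remove the group itself once it is empty
ceph fs subvolumegroup rm cephfs subgroup0 --force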
|
[
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs volume ls",
"ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]",
"ceph fs volume rm cephfs --yes-i-really-mean-it",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated",
"ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume ls cephfs --group_name subgroup0",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated",
"ceph fs subvolume authorize VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name= GROUP_NAME ] [--access_level= ACCESS_LEVEL ]",
"ceph fs subvolume authorize cephfs sub0 guest --group_name=subgroup0 --access_level=rw",
"ceph fs subvolume deauthorize VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name= GROUP_NAME ]",
"ceph fs subvolume deauthorize cephfs sub0 guest --group_name=subgroup0",
"ceph fs subvolume authorized_list VOLUME_NAME SUBVOLUME_NAME [--group_name= GROUP_NAME ]",
"ceph fs subvolume authorized_list cephfs sub0 --group_name=subgroup0 [ { \"guest\": \"rw\" } ]",
"ceph fs subvolume evict VOLUME_NAME SUBVOLUME_NAME AUTH_ID [--group_name= GROUP_NAME ]",
"ceph fs subvolume evict cephfs sub0 guest --group_name=subgroup0",
"ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME_ NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]",
"ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink",
"ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume getpath cephfs sub0 --group_name subgroup0 /volumes/subgroup0/sub0/c10cc8b8-851d-477f-99f2-1139d944f691",
"ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume info cephfs sub0 --group_name subgroup0",
"{ \"atime\": \"2020-09-08 09:27:15\", \"bytes_pcent\": \"undefined\", \"bytes_quota\": \"infinite\", \"bytes_used\": 0, \"created_at\": \"2020-09-08 09:27:15\", \"ctime\": \"2020-09-08 09:27:15\", \"data_pool\": \"cephfs_data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.8.128.22:6789\", \"10.8.128.23:6789\", \"10.8.128.24:6789\" ], \"mtime\": \"2020-09-08 09:27:15\", \"path\": \"/volumes/subgroup0/sub0/6d01a68a-e981-4ebe-84ca-96b660879173\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }",
"ceph auth get CLIENT_NAME",
"client.0 key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/bar 1 caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a 2",
"ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME _SNAP_NAME [--group_name GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"CLIENT_NAME key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path= DIRECTORY_PATH caps: [mon] allow r caps: [osd] allow rw tag cephfs data= DIRECTORY_NAME",
"client.0 key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/bar caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a",
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME --group_name SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_SUBVOLUME_NAME --target_group_name _SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1",
"ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]",
"ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }",
"ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0",
"{ \"created_at\": \"2021-09-08 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }",
"ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]",
"ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain snapshots",
"ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0",
"ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME _SNAP_NAME [--group_name GROUP_NAME --force]",
"ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolumegroup ls VOLUME_NAME",
"ceph fs subvolumegroup ls cephfs",
"ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup getpath cephfs subgroup0 /volumes/subgroup0",
"ceph auth get CLIENT_NAME",
"client.0 key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/bar 1 caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a 2",
"ceph fs subvolumegroup snapshot create VOLUME_NAME _GROUP_NAME SNAP_NAME",
"ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0",
"ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup snapshot ls cephfs subgroup0",
"ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]",
"ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force",
"ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]",
"ceph fs subvolumegroup rm cephfs subgroup0 --force"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/management-of-ceph-file-system-volumes-subvolumes-and-subvolume-groups
|
Chapter 8. Providing public access to an instance
|
Chapter 8. Providing public access to an instance New instances automatically receive a port with a fixed IP address on the network that the instance is assigned to. This IP address is private and is permanently associated with the instance until the instance is deleted. The fixed IP address is used for communication between instances. You can connect a public instance directly to a shared external network where a public IP address is directly assigned to the instance. This is useful if you are working in a private cloud. You can also provide public access to an instance through a project network that has a routed connection to an external provider network. This is the preferred method if you are working in a public cloud, or when public IP addresses are limited. To provide public access through the project network, the project network must be connected to a router with the gateway set to the external network. For external traffic to reach the instance, the cloud user must associate a floating IP address with the instance. To provide access to and from an instance, whether it is connected to a shared external network or a routed provider network, you must configure security group rules for the required protocols, such as SSH, ICMP, or HTTP. You must also pass a key pair to the instance during creation, so that you can access the instance remotely. 8.1. Prerequisites The external network must have a subnet to provide the floating IP addresses. The project network must be connected to a router that has the external network configured as the gateway. 8.2. Securing instance access with security groups and key pairs Security groups are sets of IP filter rules that control network and protocol access to and from instances, such as ICMP to allow you to ping an instance, and SSH to allow you to connect to an instance. The security group rules are applied to all instances within a project. All projects have a default security group called default , which is used when you do not specify a security group for your instances. By default, the default security group allows all outgoing traffic and denies all incoming traffic from any source other than instances in the same security group. You can either add rules to the default security group or create a new security group for your project. You can apply one or more security groups to an instance during instance creation. To apply a security group to a running instance, apply the security group to a port attached to the instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Key pairs are SSH or x509 credentials that are injected into an instance when it is launched to enable remote access to the instance. You can create new key pairs in RHOSP, or import existing key pairs. Each user should have at least one key pair. The key pair can be used for multiple instances. Note You cannot share key pairs between users in a project because each key pair belongs to the individual user that created or imported the key pair, rather than to the project. 8.2.1. Creating a security group You can create a new security group to apply to instances and ports within a project. 
Procedure Optional: To ensure the security group you need does not already exist, review the available security groups and their rules: Replace <sec_group> with the name or ID of the security group that you retrieved from the list of available security groups. Create your security group: Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 8.2.2. Updating security group rules You can update the rules of any security group that you have access to. Procedure Retrieve the name or ID of the security group that you want to update the rules for: Determine the rules that you need to apply to the security group. Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Replace <group_name> with the name or ID of the security group that you want to apply the rule to. Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 8.2.3. Deleting security group rules You can delete rules from a security group. 
Procedure Identify the security group that the rules are applied to: Retrieve IDs of the rules associated with the security group: Delete the rule or rules: Replace <rule> with the ID of the rule to delete. You can delete more than one rule at a time by specifying a space-delimited list of the IDs of the rules to delete. 8.2.4. Adding a security group to a port The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group to a port on a running instance. Procedure Determine the port on the instance that you want to apply the security group to: Apply the security group to the port: Replace <sec_group> with the name or ID of the security group you want to apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 8.2.5. Removing a security group from a port To remove a security group from a port you need to first remove all the security groups, then re-add the security groups that you want to remain assigned to the port. Procedure List all the security groups associated with the port and record the IDs of the security groups that you want to remain associated with the port: Remove all the security groups associated with the port: Re-apply the security groups to the port: Replace <sec_group> with the ID of the security group that you want to re-apply to the port on your running instance. You can use the --security-group option more than once to apply multiple security groups, as required. 8.2.6. Deleting a security group You can delete security groups that are not associated with any ports. Procedure Retrieve the name or ID of the security group that you want to delete: Retrieve a list of the available ports: Check each port for an associated security group: If the security group you want to delete is associated with any of the ports, then you must first remove the security group from the port. For more information, see Removing a security group from a port . Delete the security group: Replace <group> with the ID of the group that you want to delete. You can delete more than one group at a time by specifying a space-delimited list of the IDs of the groups to delete. 8.2.7. Generating a new SSH key pair You can create a new SSH key pair for use within your project. Note Use a x509 certificate to create a key pair for a Windows instance. Procedure Create the key pair and save the private key in your local .ssh directory: Replace <keypair> with the name of your new key pair. Protect the private key: 8.2.8. Importing an existing SSH key pair You can import an SSH key to your project that you created outside of the Red Hat OpenStack Platform (RHOSP) by providing the public key file when you create a new key pair. Procedure Create the key pair from the existing key file and save the private key in your local .ssh directory: To import the key pair from an existing public key file, enter the following command: Replace <public_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. To import the key pair from an existing private key file, enter the following command: Replace <private_key> with the name of the public key file that you want to use to create the key pair. Replace <keypair> with the name of your new key pair. Protect the private key: 8.2.9. Additional resources Security groups in the Networking Guide . 
Project security management in the Users and Identity Management Guide . 8.3. Assigning a floating IP address to an instance You can assign a public floating IP address to an instance to enable communication with networks outside the cloud, including the Internet. The cloud administrator configures the available pool of floating IP addresses for an external network. You can allocate a floating IP address from this pool to your project, then associate the floating IP address with your instance. Projects have a limited quota of floating IP addresses that can be used by instances in the project, 50 by default. Therefore, release IP addresses for reuse when you no longer need them. Prerequisites The instance must be on an external network, or on a project network that is connected to a router that has the external network configured as the gateway. The external network that the instance will connect to must have a subnet to provide the floating IP addresses. Procedure Check the floating IP addresses that are allocated to the current project: If there are no floating IP addresses available that you want to use, allocate a floating IP address to the current project from the external network allocation pool: Replace <provider-network> with the name or ID of the external network that you want to use to provide external access. Tip By default, a floating IP address is randomly allocated from the pool of the external network. A cloud administrator can use the --floating-ip-address option to allocate a specific floating IP address from an external network. Assign the floating IP address to an instance: Replace <instance> with the name or ID of the instance that you want to provide public access to. Replace <floating_ip> with the floating IP address that you want to assign to the instance. Optional: Replace <ip_address> with the IP address of the interface that you want to attach the floating IP to. By default, this attaches the floating IP address to the first port. Verify that the floating IP address has been assigned to the instance: Additional resources Creating floating IP pools in the Networking Guide . 8.4. Disassociating a floating IP address from an instance When the instance no longer needs public access, disassociate it from the instance and return it to the allocation pool. Procedure Disassociate the floating IP address from the instance: Replace <instance> with the name or ID of the instance that you want to remove public access from. Replace <floating_ip> with the floating IP address that is assigned to the instance. Release the floating IP address back into the allocation pool: Confirm the floating IP address is deleted and is no longer available for assignment: 8.5. Creating an instance with SSH access You can provide SSH access to an instance by specifying a key pair when you create the instance. Key pairs are SSH or x509 credentials that are injected into an instance when it is launched. Each project should have at least one key pair. A key pair belongs to an individual user, not to a project. Note You cannot associate a key pair with an instance after the instance has been created. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. 
To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites A key pair is available that you can use to SSH into your instances. For more information, see Generating a new SSH key pair . The network that you plan to create your instance on must be an external network, or a project network connected to a router that has the external network configured as the gateway. For more information, see Adding a router in the Networking Guide . The external network that the instance connects to must have a subnet to provide the floating IP addresses. The security group allows SSH access to instances. For more information, see Securing instance access with security groups and key pairs . The image that the instance is based on contains the cloud-init package to inject the SSH public key into the instance. A floating IP address is available to assign to your instance. For more information, see Assigning a floating IP address to an instance . Procedure Retrieve the name or ID of the flavor that has the hardware profile that your instance requires: Note Choose a flavor with sufficient size for the image to successfully boot, otherwise the instance will fail to launch. Retrieve the name or ID of the image that has the software profile that your instance requires: If the image you require is not available, you can download or create a new image. For information about creating or downloading cloud images, see Image service . Retrieve the name or ID of the network that you want to connect your instance to: Retrieve the name of the key pair that you want to use to access your instance remotely: Create your instance with SSH access: Replace <flavor> with the name or ID of the flavor that you retrieved in step 1. Replace <image> with the name or ID of the image that you retrieved in step 2. Replace <network> with the name or ID of the network that you retrieved in step 3. You can use the --network option more than once to connect your instance to several networks, as required. Optional: The default security group is applied to instances that do not specify an alternative security group. You can apply an alternative security group directly to the instance during instance creation, or to a port on the running instance. Use the --security-group option to specify an alternative security group when creating the instance. For information on adding a security group to a port on a running instance, see Adding a security group to a port . Replace <keypair> with the name or ID of the key pair that you retrieved in step 4. Assign a floating IP address to the instance: Replace <floating_ip> with the floating IP address that you want to assign to the instance. Use the automatically created cloud-user account to verify that you can log in to your instance by using SSH: 8.6. Additional resources Creating a network in the Networking Guide . Adding a router in the Networking Guide .
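The openstack commands referenced in the procedures above are collected in the command listing that follows. As a condensed, end-to-end sketch of providing SSH access to an instance, the sequence below reuses the names from this chapter ( mySecGroup , myInstancewithSSH , cloud-user ); the flavor, image, and network placeholders and the key pair name mykey are illustrative values that you replace with values from your own environment.
# create a security group and allow inbound SSH
openstack security group create mySecGroup
openstack security group rule create --protocol tcp --dst-port 22 mySecGroup
# generate a key pair and protect the private key
openstack keypair create mykey > ~/.ssh/mykey.pem
chmod 600 ~/.ssh/mykey.pem
# launch the instance with the security group and key pair
openstack server create --flavor <flavor> --image <image> --network <network> --security-group mySecGroup --key-name mykey --wait myInstancewithSSH
# allocate a floating IP address from the external network and attach it to the instance
openstack floating ip create <provider-network>
openstack server add floating ip myInstancewithSSH <floating_ip>
# log in with the automatically created cloud-user account
ssh -i ~/.ssh/mykey.pem cloud-user@<floating_ip>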
|
[
"openstack security group list openstack security group rule list <sec_group>",
"openstack security group create mySecGroup",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] mySecGroup",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] <group_name>",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group show <sec-group>",
"openstack security group rule delete <rule> [<rule> ...]",
"openstack port list --server myInstancewithSSH",
"openstack port set --security-group <sec_group> <port>",
"openstack port show <port>",
"openstack port set --no-security-group <port>",
"openstack port set --security-group <sec_group> <port>",
"openstack security group list",
"openstack port list",
"openstack port show <port-uuid> -c security_group_ids",
"openstack security group delete <group> [<group> ...]",
"openstack keypair create <keypair> > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack keypair create --public-key ~/.ssh/<public_key>.pub <keypair> > ~/.ssh/<keypair>.pem",
"openstack keypair create --private-key ~/.ssh/<private_key> <keypair> > ~/.ssh/<keypair>.pem",
"chmod 600 ~/.ssh/<keypair>.pem",
"openstack floating ip list",
"openstack floating ip create <provider-network>",
"openstack server add floating ip [--fixed-ip-address <ip_address>] <instance> <floating_ip>",
"openstack server show <instance>",
"openstack server remove floating ip <instance> <ip_address>",
"openstack floating ip delete <ip_address>",
"openstack floating ip list",
"openstack flavor list",
"openstack image list",
"openstack network list",
"openstack keypair list",
"openstack server create --flavor <flavor> --image <image> --network <network> [--security-group <secgroup>] --key-name <keypair> --wait myInstancewithSSH",
"openstack server add floating ip myInstancewithSSH <floating_ip>",
"ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_instances/assembly_providing-public-access-to-an-instance_instances
|
Chapter 1. Using Tekton Chains for OpenShift Pipelines supply chain security
|
Chapter 1. Using Tekton Chains for OpenShift Pipelines supply chain security Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines. By default, Tekton Chains observes all task run executions in your OpenShift Container Platform cluster. When the task runs complete, Tekton Chains takes a snapshot of the task runs. It then converts the snapshot to one or more standard payload formats, and finally signs and stores all artifacts. To capture information about task runs, Tekton Chains uses Result objects. When the objects are unavailable, Tekton Chains the URLs and qualified digests of the OCI images. 1.1. Key features You can sign task runs, task run results, and OCI registry images with cryptographic keys that are generated by tools such as cosign and skopeo . You can use attestation formats such as in-toto . You can securely store signatures and signed artifacts using OCI repository as a storage backend. 1.2. Configuring Tekton Chains The Red Hat OpenShift Pipelines Operator installs Tekton Chains by default. You can configure Tekton Chains by modifying the TektonConfig custom resource; the Operator automatically applies the changes that you make in this custom resource. To edit the custom resource, use the following command: USD oc edit TektonConfig config The custom resource includes a chain: array. You can add any supported configuration parameters to this array, as shown in the following example: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: {} chain: artifacts.taskrun.format: tekton config: {} 1.2.1. Supported parameters for Tekton Chains configuration Cluster administrators can use various supported parameter keys and values to configure specifications about task runs, OCI images, and storage. 1.2.1.1. Supported parameters for task run artifacts Table 1.1. Chains configuration: Supported parameters for task run artifacts Key Description Supported values Default value artifacts.taskrun.format The format for storing task run payloads. in-toto , slsa/v1 in-toto artifacts.taskrun.storage The storage backend for task run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci" . To disable storing task run artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas tekton artifacts.taskrun.signer The signature backend for signing task run payloads. x509 , kms x509 Note slsa/v1 is an alias of in-toto for backwards compatibility. 1.2.1.2. Supported parameters for pipeline run artifacts Table 1.2. Chains configuration: Supported parameters for pipeline run artifacts Parameter Description Supported values Default value artifacts.pipelinerun.format The format for storing pipeline run payloads. in-toto , slsa/v1 in-toto artifacts.pipelinerun.storage The storage backend for storing pipeline run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci" . To disable storing pipeline run artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas tekton artifacts.pipelinerun.signer The signature backend for signing pipeline run payloads. x509 , kms x509 artifacts.pipelinerun.enable-deep-inspection When this parameter is true , Tekton Chains records the results of the child task runs of a pipeline run. 
When this parameter is false , Tekton Chains records the results of the pipeline run, but not of its child task runs. "true", "false" "false" Note slsa/v1 is an alias of in-toto for backwards compatibility. For the grafeas storage backend, only Container Analysis is supported. You can not configure the grafeas server address in the current version of Tekton Chains. 1.2.1.3. Supported parameters for OCI artifacts Table 1.3. Chains configuration: Supported parameters for OCI artifacts Parameter Description Supported values Default value artifacts.oci.format The format for storing OCI payloads. simplesigning simplesigning artifacts.oci.storage The storage backend for storing OCI signatures. You can specify multiple backends as a comma-separated list, such as "oci,tekton" . To disable storing OCI artifacts, provide an empty string "" . tekton , oci , gcs , docdb , grafeas oci artifacts.oci.signer The signature backend for signing OCI payloads. x509 , kms x509 1.2.1.4. Supported parameters for KMS signers Table 1.4. Chains configuration: Supported parameters for KMS signers Parameter Description Supported values Default value signers.kms.kmsref The URI reference to a KMS service to use in kms signers. Supported schemes: gcpkms:// , awskms:// , azurekms:// , hashivault:// . See Providers in the Sigstore documentation for more details. 1.2.1.5. Supported parameters for storage Table 1.5. Chains configuration: Supported parameters for storage Parameter Description Supported values Default value storage.gcs.bucket The GCS bucket for storage storage.oci.repository The OCI repository for storing OCI signatures and attestation. If you configure one of the artifact storage backends to oci and do not define this key, Tekton Chains stores the attestation alongside the stored OCI artifact itself. If you define this key, the attestation is not stored alongside the OCI artifact and is instead stored in the designated location. See the cosign documentation for additional information. builder.id The builder ID to set for in-toto attestations https://tekton.dev/chains/v2 builddefinition.buildtype The build type for in-toto attestation. When this parameter is https://tekton.dev/chains/v2/slsa , Tekton Chains records in-toto attestations in strict conformance with the SLSA v1.0 specification. When this parameter is https://tekton.dev/chains/v2/slsa-tekton , Tekton Chains records in-toto attestations with additional information, such as the labels and annotations in each TaskRun and PipelineRun object, and also adds each task in a PipelineRun object under resolvedDependencies . https://tekton.dev/chains/v2/slsa , https://tekton.dev/chains/v2/slsa-tekton https://tekton.dev/chains/v2/slsa If you enable the docdb storage method is for any artifacts, configure docstore storage options. For more information about the go-cloud docstore URI format, see the docstore package documentation . Red Hat OpenShift Pipelines supports the following docstore services: firestore dynamodb Table 1.6. Chains configuration: Supported parameters for docstore storage Parameter Description Supported values Default value storage.docdb.url The go-cloud URI reference to a docstore collection. Used if the docdb storage method is enabled for any artifacts. firestore://projects/[PROJECT]/databases/(default)/documents/[COLLECTION]?name_field=name If you enable the grafeas storage method for any artifacts, configure Grafeas storage options. For more information about Grafeas notes and occurrences, see Grafeas concepts . 
To create occurrences, Red Hat OpenShift Pipelines must first create notes that are used to link occurrences. Red Hat OpenShift Pipelines creates two types of occurrences: ATTESTATION Occurrence and BUILD Occurrence. Red Hat OpenShift Pipelines uses the configurable noteid as the prefix of the note name. It appends the suffix -simplesigning for the ATTESTATION note and the suffix -intoto for the BUILD note. If the noteid field is not configured, Red Hat OpenShift Pipelines uses tekton-<NAMESPACE> as the prefix. Table 1.7. Chains configuration: Supported parameters for Grafeas storage Parameter Description Supported values Default value storage.grafeas.projectid The OpenShift Container Platform project in which the Grafeas server for storing occurrences is located. storage.grafeas.noteid Optional: the prefix to use for the name of all created notes. A string without spaces. storage.grafeas.notehint Optional: the human_readable_name field for the Grafeas ATTESTATION note. This attestation note was generated by Tekton Chains Optionally, you can enable additional uploads of binary transparency attestations. Table 1.8. Chains configuration: Supported parameters for transparency attestation storage Parameter Description Supported values Default value transparency.enabled Enable or disable automatic binary transparency uploads. true , false , manual false transparency.url The URL for uploading binary transparency attestations, if enabled. https://rekor.sigstore.dev Note If you set transparency.enabled to manual , only task runs and pipeline runs with the following annotation are uploaded to the transparency log: chains.tekton.dev/transparency-upload: "true" If you configure the x509 signature backend, you can optionally enable keyless signing with Fulcio. Table 1.9. Chains configuration: Supported parameters for x509 keyless signing with Fulcio Parameter Description Supported values Default value signers.x509.fulcio.enabled Enable or disable requesting automatic certificates from Fulcio. true , false false signers.x509.fulcio.address The Fulcio address for requesting certificates, if enabled. https://v1.fulcio.sigstore.dev signers.x509.fulcio.issuer The expected OIDC issuer. https://oauth2.sigstore.dev/auth signers.x509.fulcio.provider The provider from which to request the ID Token. google , spiffe , github , filesystem Red Hat OpenShift Pipelines attempts to use every provider signers.x509.identity.token.file Path to the file containing the ID Token. signers.x509.tuf.mirror.url The URL for the TUF server. USDTUF_URL/root.json must be present. https://sigstore-tuf-root.storage.googleapis.com If you configure the kms signature backend, set the KMS configuration, including OIDC and Spire, as necessary. Table 1.10. Chains configuration: Supported parameters for KMS signing Parameter Description Supported values Default value signers.kms.auth.address URI of the KMS server (the value of VAULT_ADDR ). signers.kms.auth.token Authentication token for the KMS server (the value of VAULT_TOKEN ). signers.kms.auth.oidc.path The path for OIDC authentication (for example, jwt for Vault). signers.kms.auth.oidc.role The role for OIDC authentication. signers.kms.auth.spire.sock The URI of the Spire socket for the KMS token (for example, unix:///tmp/spire-agent/public/api.sock ). signers.kms.auth.spire.audience The audience for requesting a SVID from Spire. 1.3. 
Secrets for signing data in Tekton Chains Cluster administrators can generate a key pair and use Tekton Chains to sign artifacts using a Kubernetes secret. For Tekton Chains to work, a private key and a password for encrypted keys must exist as part of the signing-secrets secret in the openshift-pipelines namespace. Currently, Tekton Chains supports the x509 and cosign signature schemes. Note Use only one of the supported signature schemes. To use the x509 signing scheme with Tekton Chains, store the x509.pem private key of the ed25519 or ecdsa type in the signing-secrets Kubernetes secret. 1.3.1. Signing using cosign You can use the cosign signing scheme with Tekton Chains using the cosign tool. Prerequisites You installed the Cosign tool. For information about installing the Cosign tool, see the Sigstore documentation for Cosign . Procedure Generate the cosign.key and cosign.pub key pairs by running the following command: USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Cosign prompts you for a password and then creates a Kubernetes secret. Store the encrypted cosign.key private key and the cosign.password decryption password in the signing-secrets Kubernetes secret. Ensure that the private key is stored as an encrypted PEM file of the ENCRYPTED COSIGN PRIVATE KEY type. 1.3.2. Signing using skopeo You can generate keys using the skopeo tool and use them in the cosign signing scheme with Tekton Chains. Prerequisites You installed the skopeo tool. Procedure Generate a public/private key pair by running the following command: USD skopeo generate-sigstore-key --output-prefix <mykey> 1 1 Replace <mykey> with a key name of your choice. Skopeo prompts you for a passphrase for the private key and then creates the key files named <mykey>.private and <mykey>.pub . Encode the <mykey>.pub file using the base64 tool by running the following command: USD base64 -w 0 <mykey>.pub > b64.pub Encode the <mykey>.private file using the base64 tool by running the following command: USD base64 -w 0 <mykey>.private > b64.private Encode the passhprase using the base64 tool by running the following command: USD echo -n '<passphrase>' | base64 -w 0 > b64.passphrase 1 1 Replace <passphrase> with the passphrase that you used for the key pair. Create the signing-secrets secret in the openshift-pipelines namespace by running the following command: USD oc create secret generic signing-secrets -n openshift-pipelines Edit the signing-secrets secret by running the following command: Add the encoded keys in the data of the secret in the following way: apiVersion: v1 data: cosign.key: <Encoded <mykey>.private> 1 cosign.password: <Encoded passphrase> 2 cosign.pub: <Encoded <mykey>.pub> 3 immutable: true kind: Secret metadata: name: signing-secrets # ... type: Opaque 1 Replace <Encoded <mykey>.private> with the content of the b64.private file. 2 Replace <Encoded passphrase> with the content of the b64.passphrase file. 3 Replace <Encoded <mykey>.pub> with the content of the b64.pub file. 1.3.3. Resolving the "secret already exists" error If the signing-secret secret is already populated, the command to create this secret might output the following error message: Error from server (AlreadyExists): secrets "signing-secrets" already exists You can resolve this error by deleting the secret. 
Procedure Delete the signing-secret secret by running the following command: USD oc delete secret signing-secrets -n openshift-pipelines Re-create the key pairs and store them in the secret using your preferred signing scheme. 1.4. Authenticating to an OCI registry Before pushing signatures to an OCI registry, cluster administrators must configure Tekton Chains to authenticate with the registry. The Tekton Chains controller uses the same service account under which the task runs execute. To set up a service account with the necessary credentials for pushing signatures to an OCI registry, perform the following steps: Procedure Set the namespace and name of the Kubernetes service account. USD export NAMESPACE=<namespace> 1 USD export SERVICE_ACCOUNT_NAME=<service_account> 2 1 The namespace associated with the service account. 2 The name of the service account. Create a Kubernetes secret. USD oc create secret registry-credentials \ --from-file=.dockerconfigjson \ 1 --type=kubernetes.io/dockerconfigjson \ -n USDNAMESPACE 1 Substitute with the path to your Docker config file. Default path is ~/.docker/config.json . Give the service account access to the secret. USD oc patch serviceaccount USDSERVICE_ACCOUNT_NAME \ -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n USDNAMESPACE If you patch the default pipeline service account that Red Hat OpenShift Pipelines assigns to all task runs, the Red Hat OpenShift Pipelines Operator will override the service account. As a best practice, you can perform the following steps: Create a separate service account to assign to user's task runs. USD oc create serviceaccount <service_account_name> Associate the service account to the task runs by setting the value of the serviceaccountname field in the task run template. apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 spec: taskRunTemplate: serviceAccountName: build-bot 1 taskRef: name: build-push ... 1 Substitute with the name of the newly created service account. 1.5. Creating and verifying task run signatures without any additional authentication To verify signatures of task runs using Tekton Chains with any additional authentication, perform the following tasks: Create an encrypted x509 key pair and save it as a Kubernetes secret. Configure the Tekton Chains backend storage. Create a task run, sign it, and store the signature and the payload as annotations on the task run itself. Retrieve the signature and payload from the signed task run. Verify the signature of the task run. Prerequisites Ensure that the following components are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Procedure Create an encrypted x509 key pair and save it as a Kubernetes secret. For more information about creating a key pair and saving it as a secret, see "Signing secrets in Tekton Chains". In the Tekton Chains configuration, disable the OCI storage, and set the task run storage and format to tekton . In the TektonConfig custom resource set the following values: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: # ... chain: artifacts.oci.storage: "" artifacts.taskrun.format: tekton artifacts.taskrun.storage: tekton # ... For more information about configuring Tekton Chains using the TektonConfig custom resource, see "Configuring Tekton Chains". 
To restart the Tekton Chains controller to ensure that the modified configuration is applied, enter the following command: USD oc delete po -n openshift-pipelines -l app=tekton-chains-controller Create a task run by entering the following command: USD oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1 1 Replace the example URI with the URI or file path pointing to your task run. Example output taskrun.tekton.dev/build-push-run-output-image-qbjvh created Check the status of the steps by entering the following command. Wait until the process finishes. USD tkn tr describe --last Example output [...truncated output...] NAME STATUS ∙ create-dir-builtimage-9467f Completed ∙ git-source-sourcerepo-p2sk8 Completed ∙ build-and-push Completed ∙ echo Completed ∙ image-digest-exporter-xlkn7 Completed To retrieve the signature from the object stored as base64 encoded annotations, enter the following commands: USD tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-USDTASKRUN_UID}" | base64 -d > sig USD export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}') To verify the signature using the public key that you created, enter the following command: 1 Replace path/to/cosign.pub with the path name of the public key file. Example output Verified OK 1.5.1. Additional resources Section 1.3, "Secrets for signing data in Tekton Chains" Section 1.2, "Configuring Tekton Chains" 1.6. Using Tekton Chains to sign and verify image and provenance Cluster administrators can use Tekton Chains to sign and verify images and provenances, by performing the following tasks: Create an encrypted x509 key pair and save it as a Kubernetes secret. Set up authentication for the OCI registry to store images, image signatures, and signed image attestations. Configure Tekton Chains to generate and sign provenance. Create an image with Kaniko in a task run. Verify the signed image and the signed provenance. Prerequisites Ensure that the following tools are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Rekor jq Procedure Create an encrypted x509 key pair and save it as a Kubernetes secret: USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Provide a password when prompted. Cosign stores the resulting private key as part of the signing-secrets Kubernetes secret in the openshift-pipelines namespace, and writes the public key to the cosign.pub local file. Configure authentication for the image registry. To configure the Tekton Chains controller for pushing signature to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section. To configure authentication for a Kaniko task that builds and pushes image to the registry, create a Kubernetes secret of the docker config.json file containing the required credentials. USD oc create secret generic <docker_config_secret_name> \ 1 --from-file <path_to_config.json> 2 1 Substitute with the name of the docker config secret. 2 Substitute with the path to docker config.json file. 
Configure Tekton Chains by setting the artifacts.taskrun.format , artifacts.taskrun.storage , and transparency.enabled parameters in the chains-config object: $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}' $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}' $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}' Start the Kaniko task. Apply the Kaniko task to the cluster. $ oc apply -f examples/kaniko/kaniko.yaml 1 1 Substitute with the URI or file path to your Kaniko task. Set the appropriate environment variables. $ export REGISTRY=<url_of_registry> 1 $ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2 1 Substitute with the URL of the registry where you want to push the image. 2 Substitute with the name of the secret in the docker config.json file. Start the Kaniko task. $ tkn task start --param IMAGE=$REGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=$DOCKERCONFIG_SECRET_NAME kaniko-chains Observe the logs of this task until all steps are complete. On successful authentication, the final image will be pushed to $REGISTRY/kaniko-chains . Wait for a minute to allow Tekton Chains to generate the provenance and sign it, and then check the availability of the chains.tekton.dev/signed=true annotation on the task run. $ oc get tr <task_run_name> \ 1 -o json | jq -r .metadata.annotations { "chains.tekton.dev/signed": "true", ... } 1 Substitute with the name of the task run. Verify the image and the attestation. $ cosign verify --key cosign.pub $REGISTRY/kaniko-chains $ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains Find the provenance for the image in Rekor. Get the digest of the $REGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest. Search Rekor to find all entries that match the sha256 digest of the image. $ rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3 ... 1 Substitute with the sha256 digest of the image. 2 The first matching universally unique identifier (UUID). 3 The second matching UUID. The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation. Check the attestation. $ rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq 1.7. Additional resources Installing OpenShift Pipelines
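The oc , tkn , cosign , and rekor-cli commands referenced above are collected in the command listing that follows. As a compact recap of the signing and verification flow in this chapter, the sketch below assumes that the Red Hat OpenShift Pipelines Operator, Tekton Chains, and Cosign are installed, that the Kaniko task and the docker config secret from the earlier steps are already in place, and that REGISTRY and DOCKERCONFIG_SECRET_NAME are set as described above.
# generate the signing key pair; Cosign stores it in the signing-secrets secret in openshift-pipelines
cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
# configure Tekton Chains to record in-toto attestations, store them in the OCI registry, and upload to Rekor
oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'
oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'
oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
# build and push the image with the Kaniko task
tkn task start --param IMAGE=$REGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=$DOCKERCONFIG_SECRET_NAME kaniko-chains
# after the task run carries the chains.tekton.dev/signed=true annotation, verify the signature and the attestation
cosign verify --key cosign.pub $REGISTRY/kaniko-chains
cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains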
|
[
"oc edit TektonConfig config",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: {} chain: artifacts.taskrun.format: tekton config: {}",
"chains.tekton.dev/transparency-upload: \"true\"",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"skopeo generate-sigstore-key --output-prefix <mykey> 1",
"base64 -w 0 <mykey>.pub > b64.pub",
"base64 -w 0 <mykey>.private > b64.private",
"echo -n '<passphrase>' | base64 -w 0 > b64.passphrase 1",
"oc create secret generic signing-secrets -n openshift-pipelines",
"oc edit secret -n openshift-pipelines signing-secrets",
"apiVersion: v1 data: cosign.key: <Encoded <mykey>.private> 1 cosign.password: <Encoded passphrase> 2 cosign.pub: <Encoded <mykey>.pub> 3 immutable: true kind: Secret metadata: name: signing-secrets type: Opaque",
"Error from server (AlreadyExists): secrets \"signing-secrets\" already exists",
"oc delete secret signing-secrets -n openshift-pipelines",
"export NAMESPACE=<namespace> 1 export SERVICE_ACCOUNT_NAME=<service_account> 2",
"oc create secret registry-credentials --from-file=.dockerconfigjson \\ 1 --type=kubernetes.io/dockerconfigjson -n USDNAMESPACE",
"oc patch serviceaccount USDSERVICE_ACCOUNT_NAME -p \"{\\\"imagePullSecrets\\\": [{\\\"name\\\": \\\"registry-credentials\\\"}]}\" -n USDNAMESPACE",
"oc create serviceaccount <service_account_name>",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 spec: taskRunTemplate: serviceAccountName: build-bot 1 taskRef: name: build-push",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: chain: artifacts.oci.storage: \"\" artifacts.taskrun.format: tekton artifacts.taskrun.storage: tekton",
"oc delete po -n openshift-pipelines -l app=tekton-chains-controller",
"oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1",
"taskrun.tekton.dev/build-push-run-output-image-qbjvh created",
"tkn tr describe --last",
"[...truncated output...] NAME STATUS ∙ create-dir-builtimage-9467f Completed ∙ git-source-sourcerepo-p2sk8 Completed ∙ build-and-push Completed ∙ echo Completed ∙ image-digest-exporter-xlkn7 Completed",
"tkn tr describe --last -o jsonpath=\"{.metadata.annotations.chains\\.tekton\\.dev/signature-taskrun-USDTASKRUN_UID}\" | base64 -d > sig",
"export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}')",
"cosign verify-blob-attestation --insecure-ignore-tlog --key path/to/cosign.pub --signature sig --type slsaprovenance --check-claims=false /dev/null 1",
"Verified OK",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"oc create secret generic <docker_config_secret_name> \\ 1 --from-file <path_to_config.json> 2",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.format\": \"in-toto\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.storage\": \"oci\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"transparency.enabled\": \"true\"}}'",
"oc apply -f examples/kaniko/kaniko.yaml 1",
"export REGISTRY=<url_of_registry> 1 export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2",
"tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir=\"\" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains",
"oc get tr <task_run_name> \\ 1 -o json | jq -r .metadata.annotations { \"chains.tekton.dev/signed\": \"true\", }",
"cosign verify --key cosign.pub USDREGISTRY/kaniko-chains cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains",
"rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3",
"rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/securing_openshift_pipelines/using-tekton-chains-for-openshift-pipelines-supply-chain-security
|
8.98. libvirt
|
8.98. libvirt 8.98.1. RHBA-2013:1581 - libvirt bug fix and enhancement update Updated libvirt packages that fix a number of bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fixes BZ# 846013 Previously, due to several issues, IPv6 was not handled properly during migration. With this update, migrations now succeed in the described scenario. BZ# 847822 Without manual configuration, the remote driver did not support connection to the session instance of the libvirtd daemon. This behavior could confuse users, who attempted to use such a configuration. With this update, connections that do not have the necessary manual configuration are not allowed by libvirt. BZ# 851075 Previously, the libvirt library was missing driver implementation for the ESX environment. As a consequence, a user could not configure any network for an ESX guest. The network driver has been implemented and a user now can configure networks for ESX guests as expected. BZ# 882077 Previously, libvirt reported raw QEMU errors when creating of snapshots failed, and the error message provided was confusing. With this update, libvirt now gives a clear error message when QEMU is not capable of making snapshots. BZ# 888503 The AMD family 15h processors CPU architecture consists of " modules " , which are represented both as separate cores and separate threads. Management applications needed to choose between one of the approaches, and libvirt did not provide enough information to do this. In addition, the management applications were not able to represent the modules in an AMD family 15h processors core according to their needs. The capabilities XML output now contains more information about the processor topology, so that the management applications can extract the information they need. BZ# 892079 Previously, the libvirtd daemon was unable to execute an s3 or s4 operation for a Microsoft Windows guest which ran the guest agent service. Consequently, this resulted in the " domain s4 fail " error message, due to the domain being destroyed. With this update, the guest is destroyed successfully and libvirtd no longer crashes. BZ# 894723 A virtual machine (VM) can be saved into a compressed file. Previously, when decompression of that file failed while libvirt was trying to resume the VM, libvirt removed the VM from the list of running VMs. However, it did not remove the corresponding QEMU process. With this update, the QEMU process is killed in such cases. Moreover, non-fatal decompression errors are now ignored and a VM can be successfully resumed if such an error occurs. BZ# 895294 Updating a network interface using the virDomainUpdateDeviceFlags API failed when a boot order was set for that interface. The update failed even if the boot order was set in the provided device XML. virDomainUpdateDeviceFlags API has been fixed to correctly parse boot order specification from the provided device XML and updating network interfaces with boot orders now works as expected. BZ# 895340 The libvirt library allows users to set Quality of Service (QoS) on a domain's Network Interface Controller (NIC). However, due to a bug in the implementation, certain values were not set correctly. As a consequence, the real throughput did not correspond with the one set in a domain XML. 
The underlying source code has been modified to set the correct values from the XML and the throughput now corresponds with the one set in the XML as expected. BZ# 895424 Hot unplug of vCPUs is not supported by QEMU in Red Hat Enterprise Linux 6. Therefore, an attempt to use this functionality failed, but the count of processors as remembered by the libvirt library was nevertheless updated to the new number. With this update, libvirt now verifies whether QEMU actually unplugged the CPUs so that the internal information is updated only when the unplug was successful. BZ# 895826 Previously, when a migration failed, the destination host started to relabel files because it was no longer using them. However, this behavior impacted the source host, which was still running. As a consequence, guests could lose the ability to write to disks. This update applies a patch to fix this bug so that files that are still in use are no longer relabeled in the described scenario. BZ# 895882 Python bindings for the libvirt library contained an incorrect implementation of the getDomain() and getConnect() methods in the virDomainSnapshot class. Consequently, the Python client terminated unexpectedly with a segmentation fault. The Python bindings now provide proper domain() and connect() accessors that fetch the Python objects stored internally within the virDomainSnapshot instance, and crashes no longer occur. BZ# 896013 Previously, the libvirt library added a cache of storage file backing chains, rather than rediscovering the backing chain details on every operation. This cache was then used to decide which files to label for sVirt, but when libvirt switched over to use the cache, the code only populated the cache when kernel control groups (cgroups) were in use. On setups that did not use cgroups, sVirt was unable to properly label backing chain files due to the lack of backing chain cache information. This behavior caused a regression in which guests were prevented from running. Populating the cache has now been moved earlier in the process so that it is independent of cgroups; the cache results in more efficient sVirt operations and works whether or not cgroups are in effect. BZ# 903238 Occasionally, when users ran multiple virsh create or destroy loops, a race condition could occur and the libvirtd daemon terminated unexpectedly with a segmentation fault. False error messages reporting to the caller that the domain had already been destroyed also occurred. With this update, the outlined script runs and completes without libvirtd crashing. BZ# 903248 Previously, the libvirt library followed relative backing chains differently than QEMU. This resulted in missing sVirt permissions when libvirt could not follow the chain. With this update, relative backing files are now treated identically in libvirt and QEMU, and VDSM use of relative backing files functions properly. BZ# 903433 When kernel control groups (cgroups) were enabled, moving tasks among cgroups could, in rare occurrences, result in a race condition. Consequently, a guest could fail to start after repeating the start and stop commands tens of times using the virsh utility. With this update, the code that handles groups of threads has been optimized to prevent races while moving from one cgroup to another, and guests now start as expected in the described scenario. BZ# 906299 Various memory leaks in the libvirtd daemon were discovered when users ran the Coverity and Valgrind leak detection tools. 
This update addresses these issues, and libvirtd no longer leaks memory in the described scenario. BZ# 908073 Previously, when users started a guest with a sharable block CD-ROM, the libvirtd daemon failed unexpectedly due to accessing memory that had already been freed. This update addresses the aforementioned issue, and libvirtd no longer crashes in the described scenario. BZ# 911609 Due to a race condition in the libvirt client library, any application using libvirt could terminate unexpectedly with a segmentation fault. This happened when one thread executed the connection close callback while another one freed the connection object, and the connection callback thread then accessed memory that had already been freed. This update removes the possibility of freeing the callback data while it is still being accessed. BZ# 912179 When asked to create a logical volume with zero allocation, the libvirt library ran the lvcreate command to create a volume with no extents, which is not permitted. Creation of logical volumes with zero allocation failed and libvirt returned an error message that did not mention the correct error. Now, rather than asking for no extents, libvirt tries to create the volume with a minimal number of extents. The code has also been fixed to provide the correct error message when the volume creation process fails. As a result, logical volumes with zero allocation can now be successfully created using libvirt. BZ# 913244 When auto-port and port were not specified, but the tlsPort attribute was set to "-1", the tlsPort parameter specified on the QEMU command line was set to "1" instead of a valid port. Consequently, QEMU failed, because it was unable to bind a socket on the port. This update replaces the QEMU driver code for managing port reservations with the new virPortAllocator APIs, and QEMU is now able to bind a socket on the port. BZ# 913363 The libvirt library could abort migration when a domain's disks used unsafe cache settings even though they were not stored on shared storage and libvirt was explicitly asked to copy all storage. As a consequence, migration without shared storage was only possible with the VIR_MIGRATE_UNSAFE flag enabled. With this update, the test for safe disk cache settings is limited to shared storage because any setting is safe for locally stored disk images. BZ# 914677 Previously, the libvirt library was not tolerant of missing unpriv_sgio support in the running kernel even when it was not necessary. Consequently, after upgrading the host system to Red Hat Enterprise Linux 6.5, users were unable to start domains using shareable block disk devices unless they rebooted the host into the new kernel. With this update, the check for unpriv_sgio support is only performed when it is really needed. As a result, libvirt is now able to start all domains that do not strictly require unpriv_sgio support regardless of host kernel support for it. BZ# 916315 Due to a bug in the libvirt code, the virDomainBlockStatsFlags() and virDomainDetachDeviceFlags() APIs could be executed concurrently. As a consequence, the libvirtd daemon terminated unexpectedly. The underlying source code has been modified to make these APIs mutually exclusive so that the daemon no longer crashes in such a case. BZ# 917510 When a virtual machine (VM) with a managed save image was started with the "--force-boot" parameter, which removed the managed save image, a flag holding the managed save state was not cleared. 
This caused incorrect information to be displayed, and some operations regarding the managed save state failed. This bug has been fixed and the flag is now correctly cleared in the described scenario. BZ# 920205 At the end of migration, libvirt was waiting for the Simple Protocol for Independent Computing Environments (SPICE) data to be migrated to the destination QEMU before it resumed the domain on the destination host. This significantly increased the waiting time when the domain was not running on any host. With this update, the underlying code has been modified not to wait until the end of the SPICE migration. As a result, the resume is done as soon as possible without any significant delay. BZ# 920441 Previously, the listen attribute in QEMU cookie files was discarded. Consequently, if the user had different networks in use, one for management and migration, and one for Virtual Network Computing (VNC) and SPICE, the remote host name was passed to QEMU via the client_migrate_info flag. This caused the SPICE client to be disconnected upon migration of a virtual machine. With this update, the remote listen address is passed instead and the SPICE client is no longer disconnected in the described scenario. BZ# 921387 Due to a use-after-free bug in the logical storage back end, the libvirtd daemon could terminate unexpectedly when deleting a logical storage pool. The underlying source code has been modified and the daemon now works as expected when deleting logical volumes. BZ# 921538 Due to a race condition on the client side of libvirt's RPC implementation, a client connection that was closed by the server could be freed even though other threads were still waiting for APIs sent through this connection to finish. As a consequence, the other threads could have accessed memory that had already been freed and the client terminated unexpectedly with a segmentation fault. With this update, the connection is freed only after all threads process their API calls and report errors to their callers. BZ# 921777 Previously, a lock used when dealing with transient networks was incorrect. Consequently, when the define API was used on a transient network, the network object lock was not unlocked as expected. The underlying source code has been modified and the object lock is now unlocked correctly. BZ# 922153 Previously, the libvirt library made control group (cgroup) requests on files that it should not have. With older kernels, such nonsensical cgroup requests were ignored; however, newer kernels are stricter, resulting in libvirt logging spurious warnings and failures to the libvirtd and audit logs. The audit log failures displayed by the ausearch tool were similar to the following: With this update, libvirt no longer attempts the nonsensical cgroup actions, leaving only valid attempts in the libvirtd and audit logs. BZ# 922203 Previously, the libvirt library used an incorrect variable when constructing audit messages. This led to invalid audit messages, causing the ausearch utility to format certain entries as having "path=(null)" instead of the correct path. This could prevent ausearch from locating events related to cgroup device Access Control List (ACL) modifications for guests managed by libvirt. With this update, the audit messages are generated correctly, preventing loss of audit coverage. BZ# 923613 Previously, the vol-download command was described incorrectly in the virsh(1) manual page. With this update, the command description has been fixed. 
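For reference alongside the corrected manual page entry (BZ#923613), the vol-download command copies the contents of a storage volume into a local file. The pool and volume names below are illustrative only, not taken from this advisory:

```bash
# Download a storage volume to a local file; "default" and "guest-disk.img" are example names.
virsh vol-download --pool default guest-disk.img /tmp/guest-disk-copy.img

# The --offset and --length options (in bytes) limit the transfer to part of the volume.
virsh vol-download --pool default guest-disk.img /tmp/first-1M.img --offset 0 --length 1048576
```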
BZ# 923946 When SELinux was disabled on a host, or the QEMU driver was configured not to use it, and the domain XML configuration contained an explicit seclabel option, the code parsed the seclabel option, but ignored it later when it was generating labels on domain start, and created a new and empty seclabel entry [seclabeltype='none'/]. Consequently, a migration between two hosts running Red Hat Enterprise Linux 6.5 failed with the following error message: With this update, if the seclabel entry already exists, a new one is no longer created, and the migration works as expected in the described scenario. BZ# 923963 Previously, there was an Application Binary Interface (ABI) inconsistency in messages of the kernel netlink protocol between certain versions of Red Hat Enterprise Linux. When the libvirt library sent a netlink NLM_F_REQUEST message and the libvirt binary had been built using kernel header files from a different version of the kernel than the version of the machine running libvirt, errors were returned. Consequently, Peripheral Component Interconnect (PCI) passthrough device assignments of SR-IOV network devices failed when they used the [interface type='hostdev'] option, or when the libvirt network was set with the [forward mode='hostdev'] option. In such a case, the following error message or a similar one was returned: With this update, libvirt retries the NLM_F_REQUEST message formatted appropriately for all versions of the kernel. Now, a single libvirt binary successfully assigns SR-IOV network devices to a guest using PCI passthrough on a host running any version of Red Hat Enterprise Linux 6 kernel. BZ# 924571 Previously, the vol-name command of the virsh utility printed a NULL string when there was no option for specifying the pool. Consequently, an error message was returned, which could confuse users. The command has been modified to not require to specify an option in case where it is not needed. As a result, the error message is no longer returned in the described scenario. BZ# 924648 The QEMU driver currently does not support increasing of the maximum memory size. However, this ability was documented in the virsh(1) manual page. With this update, the manual page has been corrected. BZ# 928661 Previously, part of the code refactoring to fix another bug, left a case where locks were cleaned up incorrectly. As a consequence, the libvirtd daemon could terminate unexpectedly on certain migration to file scenarios. After this update, the lock cleanup paths were fixed and libvirtd no longer crashed when saving a domain to a file. BZ# 947387 The libvirt library uses side files to store the internal state of managed domains in order to re-read the state upon the libvirtd service restart. However, if a domain state was saved in an inconsistent state, the state was not re-read and the corresponding domain was lost. As a consequence, the domain could disappear. After this update, when the libvirtd service is saving the internal state of a domain, the consistent internal state is saved and domains which may break it are disallowed from starting. As a result, the domain is no longer forgotten. BZ# 948678 Previously, attempts to clone a storage volume that was not in the RAW format from a directory pool, file system pool, or NFS pool, to a LVM pool, using the " virsh vol-create-from " command, failed with an " unknown file format " error message. This update fixes this bug by treating output block devices as the RAW file format and storage volumes can now be cloned as expected. 
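As a minimal sketch of the cloning operation fixed in BZ#948678, the following assumes a source qcow2 volume in a directory pool named default and a target LVM pool named lvm-pool; all names and sizes are examples:

```bash
# Describe the new volume for the target LVM pool.
cat > newvol.xml <<'EOF'
<volume>
  <name>cloned-disk</name>
  <capacity unit="G">10</capacity>
</volume>
EOF

# Clone the qcow2 source volume into the LVM pool; the output block device is treated as RAW.
virsh vol-create-from lvm-pool newvol.xml source-disk.qcow2 --inputpool default
```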
BZ# 950286 Under certain conditions, when a connection was closed, guests set to be automatically destroyed failed to be destroyed. As a consequence, the libvirtd daemon terminated unexpectedly. A series of patches addressing various crash scenarios has been provided and libvirtd no longer crashes while auto-destroying guests. BZ# 951227 When running the libvirt test suite on a machine under a heavy load, the test could end up in a deadlock. Since the test suite was run during an RPM build, the build never finished if a deadlock occurred. This update fixes the handling of an event loop used in the test suite, and the test suite no longer hangs in the described scenario. BZ# 955575 Previously, the VirtualHW application version 9 was not set as supported even though the corresponding ESX version 5.1 was set to be supported earlier. As a consequence, when a connection was made to an ESX 5.1 server with a guest using virtualHW version 9, the following error was displayed: This update adds VirtualHW version 9 into the list of supported versions and the aforementioned error message is no longer displayed in this scenario. BZ# 960683 Libvirt's internal data structures which hold information about the topology of the host and guest, are limited in size to avoid the possibility of a denial-of-service (DoS) attack on the daemon. However, these limits were too strict and did not take into account the possibility that hosts with 4096 CPUs might be used with libvirt. After this update, the limits have been increased to allow scalability even on larger systems. BZ# 961034 Prior to this update, the F_DUPFD_CLOEXEC operation with the fcntl() function expected a single argument, specifying the minimum file descriptor (FD) number, but none was provided. Consequently, random stack data were accessed as the FD number and a libvirt live migration could then terminate unexpectedly. This update ensures that the argument is provided in the described scenario, thus fixing this bug. BZ# 964359 Previously, the libvirtd daemon set up supplemental groups of child processes by making a call between the fork() and exec() functions to the getpwuid_r()function, which could cause a mutual exclusion (mutex). As a consequence, if another thread was already holding the getpwuid_r mutex at the time libvirtd called the fork() function, the forked child process deadlocked, which in turn caused libvirtd to become unresponsive. The code to compute the set of supplemental groups has been refactored so that no mutex is required after fork. As a result, the deadlock scenario is no longer possible. BZ# 965442 Previously, the libvirt library did not update the pool information after adding, removing, or resizing a volume. As a consequence, the user had to refresh the pool using the "virsh pool-refresh" command to get the correct pool information after these actions. After this update, the pool information is automatically updated after adding, removing, or resizing a volume. BZ# 970495 Previously, the virsh utility considered the "--pool" argument of the "vol-create" and "vol-create-as" commands to be a pool name. As a consequence, vol-create and vol-create-as virsh commands did not work when a pool was specified by its Universally Unique Identifier (UUID), even though they were documented to accept both name and UUID for pool specification. With this update, virsh has been fixed to look up a pool both by name and UUID. As a result, both virsh commands now work according to their documentation. 
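A short illustration of the behavior fixed in BZ#970495; the pool name, volume name, and size are examples only:

```bash
# Convert a pool name to its UUID.
POOL_UUID=$(virsh pool-uuid default)

# vol-create-as now accepts either the pool name or its UUID.
virsh vol-create-as "$POOL_UUID" testvol 1G --format qcow2
```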
BZ# 971485 Previously, if the user had not specified a Virtual Network Computing (VNC) listen address in their domain XML, the one from the qemu.conf file was used. However, upon migrating, there was no difference between cases where the listen address was set by the user directly in the XML or copied from the qemu.conf file. As a consequence, a domain could not be migrated. After this update, if the listen address is copied from qemu.conf, it is not transferred to the destination. As a result, a domain can be migrated successfully. BZ# 971904 Previously, the libvirt library's logging function that was passed to the libudev library did not handle strings with multiple parameters correctly. As a consequence, the libvirtd daemon could terminate unexpectedly when libudev logged a message. After this update, libvirt handles multiple parameters correctly. As a result, libvirtd no longer crashes when libudev logs messages. BZ# 975201 Previously, the libvirt library only loaded one Certification Authority (CA) certificate from the cacert.pem file even though the file contained several chained CA certificates. As a consequence, libvirt failed to validate client and server certificates when they were both signed by intermediate CA certificates sharing a common ancestor CA. After this update, the underlying code has been fixed to load all CA certificates. As a result, the CA certificate validation code works correctly when client and server certificates are both signed by intermediate CA certificates sharing a common ancestor CA. BZ# 975751 Previously, due to older hypervisor versions, many features were available only for guests with a single display. As a consequence, guests with two displays could not be properly defined on the QEMU hypervisor and some other features did not properly take the second display into consideration. With this update, the ability to define more display types has been added and all one-display assumptions have been fixed in all relevant code. As a result, domains with multiple displays can now be defined, properly migrated, and started. BZ# 976401 The SPICE protocol can be set to listen on a given IP address or to obtain the listening IP address from a given network. QEMU does not allow changing the SPICE listening IP address at runtime, therefore the libvirt library verifies this IP address with every user's update of SPICE settings on a guest. A regression bug in the libvirt code caused libvirt to incorrectly evaluate this listening IP address check if the user had SPICE set to listen on a given network, because the user's XML request contained both the listening IP address and the network address. Consequently, the user's operation was rejected. With this update, libvirt also considers the type of the listening IP address when comparing an IP address from the user's request with the current listening IP address. The user is now able to update SPICE settings on a guest as expected in this scenario. BZ# 977961 When migrating, the libvirtd daemon leaked the migration Uniform Resource Identifier (URI) on the destination. A patch has been provided to fix this bug and the migration URI is now freed correctly. BZ# 978352 Prior to this update, the libvirtd daemon leaked memory in the virCgroupMoveTask() function. A fix has been provided which prevents libvirtd from managing memory allocations incorrectly. BZ# 978356 Previously, the libvirtd daemon was accessing one byte before the array in the virCgroupGetValueStr() function. This bug has been fixed and libvirtd now stays within array bounds. 
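To relate this to BZ#976401, a guest that obtains its SPICE listening address from a network carries a listen element of type network in its XML. The guest name below is an example, and the XML shown in the comments is a typical shape rather than output taken from this advisory:

```bash
# Inspect the current SPICE graphics configuration of a guest.
virsh dumpxml rhel6-guest | grep -A 3 "<graphics"

# A network-based listen configuration typically looks like this:
# <graphics type='spice' autoport='yes'>
#   <listen type='network' network='default'/>
# </graphics>
```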
BZ# 979330 Previously, the libvirt library depended on a "change" notification from the kernel to indicate that it should change the name of the device driver bound to a device. However, this change notification was not sent. As a consequence, the output from the "virsh nodedev-dumpxml" command always showed the device driver that was bound to the device at the time libvirt was started and not the currently-bound driver. This bug has been fixed and libvirt now manually updates the driver name every time a "nodedev-dumpxml" command is executed, rather than depending on a change notification. As a result, the driver name form the output of "nodedev-dumpxml" is always correct. BZ# 980339 Previously, if an incorrect device name was given in the <pf> element of a libvirt network definition, libvirt terminated unexpectedly when a guest attempted to create an interface using that network. With this update, libvirt now validates the <pf> device name to verify that it exists and that it is an sriov-capable network device. As a result, libvirt no longer crashes when a network with incorrect <pf> is referenced. Instead, it logs an appropriate error message and prevents the operation. BZ# 983539 Previously, the virStorageBackendFileSystemMount() function returned success even if the mount command had failed. As a consequence, libvirt showed the pool as running even though it was unusable. After this update, an error is displayed if the mount command has failed. As a result, libvirt no longer displays a success message when the mount command fails. BZ# 999107 Due to an omission in the libvirt code, the VLAN tag for a hostdev-based network (a network which is a pool of SRIOV virtual functions to be assigned to guests via PCI device assignment) was not being properly set in the hardware device. With this update, the missing code has been provided and a VLAN tag set in the network definition is now properly presented to the devices as they are assigned to guests. BZ# 1001881 Previously, the libvirt library was erroneously attempting to use the same alias name for multiple hostdev network devices. As a consequence, it was impossible to start a guest that had more than one hostdev network device in its configuration. With this update, libvirt now ensures that each device has a different alias name. As a result, it is now possible to start a guest with multiple hostdev network devices in its configuration. BZ# 1002790 The description of the blockcopy command in the virsh(1) manual page was identical to the description of the blockpull command. The correct descriptions have been provided with this update. BZ# 1006710 Previously, when parsing the domain XML with an "auto" numatune placement and the "nodeset" option was specified, the nodeset bitmap was freed twice. As a consequence, the libvirtd daemon terminated unexpectedly due to the double freeing. After this update, libvirtd now sets the pointer to NULL after freeing it. As a result, libvirtd no longer crashes in this scenario. BZ# 1009886 Previously, due to code movement, there was an invalid job used for querying for the SPICE migration status. As a consequence, when migrating a domain with a Simple Protocol for Independent Computing Environments (SPICE) seamless migration and using the domjobinfo command to request information on the same domain at the same time, the libvirtd daemon terminated unexpectedly. After this update, the job has been set properly and libvirtd no longer crashes in this scenario. 
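The scenario in BZ#1009886 can be reproduced with two shells; the guest and destination host names below are placeholders:

```bash
# Shell 1: start a live migration that includes SPICE seamless migration.
virsh migrate --live rhel6-guest qemu+ssh://dest.example.com/system

# Shell 2: query the migration job while it is running.
# With this update, doing so no longer crashes libvirtd.
virsh domjobinfo rhel6-guest
```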
BZ# 1011981 Whereas the status command of the libvirt-guests init script returned the "0" value when the libvirt-guests service was stopped, the Linux Standard Base (LSB) requires a different value ("3") in such a case. Consequently, other scripts relying on the return value could not distinguish whether the service was running or not. The libvirt-guests script has been fixed to conform with LSB and the "service libvirt-guests status" command now returns the correct value in the described scenario. BZ# 1013758 Previously, the libvirt library contained a heuristic to determine the limit for maximum memory usage by a QEMU process. If the limit was reached, the kernel simply killed the QEMU process and the domain was killed as well. This limit, however, cannot be guessed correctly. As a consequence, domains were killed seemingly at random. With this update, the heuristic has been dropped and domains are no longer killed by the kernel. Enhancements BZ# 803602 This enhancement adds the ability to specify a share policy for a domain's Virtual Network Computing (VNC) console. Recent changes in QEMU behavior from shared to exclusive VNC caused certain deployments, which used only shared VNC, to stop working. With a new attribute, "sharePolicy", users are able to change the policy from exclusive to shared, and such deployments now work correctly (a brief configuration sketch follows at the end of this section). BZ# 849796 This enhancement introduces QEMU's native GlusterFS support. Users are now able to add a disk image stored on GlusterFS volumes to a QEMU domain as a network disk. BZ# 851455 For security reasons, the libvirt library by default uses only ports larger than 1023 ("unprivileged ports") for Network Address Translation (NAT) of network traffic from guests. However, sometimes guests need access to network services that are only available if a privileged port is used. This enhancement provides a new element, "<nat>", which allows the user to specify an address range, a port range, or both, to use for NAT of network traffic. BZ# 878765 This update adds a missing description of the "migrateuri" parameter of the "migrate" command to the virsh(1) manual page. BZ# 896604 With this enhancement, the libvirt library now supports the ram_size parameter. Users are now able to set the RAM memory when using multiple heads in one Peripheral Component Interconnect (PCI) device. BZ# 924400 The QEMU guest agent now supports enabling and disabling of guest CPUs. With this enhancement, support for this feature has been added to the libvirt library so that users are now able to use libvirt APIs to disable CPUs in a guest for performance and scalability reasons. BZ# 928638 Domain Name System (DNS) servers, and especially root DNS servers, discourage forwarding of DNS requests that are not fully qualified domain names, that is, requests which do not include the domain as well as the host name. Also, the dnsmasq processes started by libvirt to service guests on its virtual networks prohibit forwarding such requests. However, there are certain circumstances where this is desirable. This update adds permission for upstream forwarding of DNS requests with unqualified domain names. The libvirt library now provides an option in its network configuration to allow forwarding of DNS requests with non-qualified host names. The "forwardPlainNames='yes'" option must be added as an attribute to the <dns> element of a network, after which such forwards are allowed. BZ# 947118 Support for locking a domain's memory in the host's memory has been added to the libvirt library. 
This update enables users to prevent a domain's memory pages from being swapped out, and thus to avoid the latency in domain execution caused by swapping. Users can now configure domains to always be present in the host memory. BZ# 956826 QEMU I/O throttling provides fine-grained I/O control in virtual machines and provides an abstraction layer on top of the underlying storage devices. BZ# 826315 , BZ# 822306 A new pvpanic virtual device can be wired into the virtualization stack and a guest panic can cause libvirt to send a notification event to management applications. This feature is introduced in Red Hat Enterprise Linux 6.5 as a Technology Preview. Note that enabling the use of this device requires the use of additional qemu command line options; this release does not include any supported way for libvirt to set those options. BZ# 1014198 Previously, the virDomainDeviceUpdateFlags() function in the libvirt library allowed users to update some configuration on a domain device while the domain was still running. Consequently, when updating a Network Interface Controller (NIC), the QoS could not be changed because of a missing implementation. With this update, the missing implementation has been added, and QoS can now be updated on a NIC. Users of libvirt are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing the updated packages, libvirtd will be restarted automatically. 8.98.2. RHBA-2013:1748 - libvirt bug fix update Updated libvirt packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remote management of virtualized systems. Bug Fix BZ# 1029632 When two clients tried to start the same transient domain, libvirt might not have properly detected that the same domain was already being started. Consequently, more than one QEMU process could run for the same domain while libvirt did not know about them. With this update, libvirt has been fixed to properly check whether the same domain is already being started, and thus avoids starting more than one QEMU process for the same domain. Users of libvirt are advised to upgrade to these updated packages, which fix this bug. After installing the updated packages, libvirtd will be restarted automatically.
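As a sketch of the sharePolicy enhancement described above (BZ#803602): the guest name is an example, and the attribute accepts the values allow-exclusive, force-shared, and ignore:

```bash
# Open the guest definition for editing.
virsh edit rhel6-guest

# In the editor, add sharePolicy to the VNC <graphics> element, for example:
# <graphics type='vnc' port='-1' autoport='yes' sharePolicy='force-shared'/>
```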
|
[
"root [date] - failed cgroup allow path rw /dev/kqemu",
"libvirtError: XML error: missing security model when using multiple labels",
"error dumping (eth3) (3) interface: Invalid argument",
"internal error Expecting VMX entry 'virtualHW.version' to be 4, 7 or 8 but found 9"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libvirt
|
Appendix F. Notable Changes in IdM
|
Appendix F. Notable Changes in IdM Certain IdM versions introduce new commands or replace existing ones. Additionally, sometimes configuration or installation procedures change extensively. This appendix describes the most important changes. For a more detailed list of changes, see the Red Hat Enterprise Linux (RHEL) 7 Release Notes for the individual versions. IdM 4.6 running on RHEL 7.7 The ipa-cert-fix utility has been added to renew system certificates when IdM is offline. For details, see Section 26.2.3, "Renewing Expired System Certificates When IdM is Offline" . IdM now supports IP addresses in the SAN extension of certificates: in certain situations, administrators need to issue certificates with an IP address in the Subject Alternative Name (SAN) extension. Starting with this release, administrators can set an IP address in the SAN extension if the address is managed in the IdM DNS service and associated with the subject host or service principal. IdM now prevents using single-label domain names, for example .company. The IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com. For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.7 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.6 running on RHEL 7.6 For changes in this release, see the following sections in the Red Hat Enterprise Linux 7.6 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.5 running on RHEL 7.5 For changes in this release, see the following sections in the Red Hat Enterprise Linux 7.5 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.5 running on RHEL 7.4 This version changed the SSL back end for client HTTPS connections from Network Security Services (NSS) to OpenSSL. As a consequence, the Registration Authority (RA) now stores its certificate in the /var/lib/ipa/ directory instead of an NSS database. For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.4 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.4 running on RHEL 7.3 The new ipa-replica-manage clean-dangling-ruv command enables administrators to remove all relative update vectors (RUV) from an uninstalled replica. The new ipa server-del command enables administrators to uninstall an IdM server. The following commands introduced in this version enable administrators to manage IdM Certificate Authorities (CA): ipa ca-add ipa ca-del ipa ca-enable ipa ca-disable ipa ca-find ipa ca-mod ipa ca-show The following commands introduced in this version replace the ipa-replica-manage command to manage replication agreements: ipa topology-configure ipa topologysegment-mod ipa topologysegment-del ipa topologysuffix-add ipa topologysuffix-show ipa topologysuffix-verify The following commands introduced in this version enable administrators to display a list of IdM servers stored in the cn=masters,cn=ipa,cn=etc, domain_suffix entry: ipa server-find ipa server-show The certmonger helper scripts have been moved from the /usr/lib64/ipa/certmonger/ to the /usr/libexec/ipa/certmonger/ directory. 
This version introduced domain levels and the following commands to display and set the domain level: ipa domainlevel-set ipa domainlevel-show For further changes in this release, see the following sections in the Red Hat Enterprise Linux 7.3 Release Notes : New Features - Authentication and Interoperability Notable Bug Fixes - Authentication and Interoperability IdM 4.2 running on RHEL 7.2 Support for multiple certificate profiles and user certificates: Identity Management now supports multiple profiles for issuing server and other certificates instead of only supporting a single server certificate profile. The profiles are stored in the Directory Server and shared between IdM replicas. In addition, the administrator can now issue certificates to individual users. Previously, it was only possible to issue certificates to hosts and services. For further changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.2 Release Notes . IdM 4.1 running on RHEL 7.1 The following commands introduced in this version replace the ipa-getkeytab -r command to retrieve keytabs and set retrieval permissions: ipa-host-allow-retrieve-keytab ipa-host-disallow-retrieve-keytab ipa-host-allow-create-keytab ipa-host-disallow-create-keytab ipa-service-allow-retrieve-keytab ipa-service-disallow-retrieve-keytab ipa-service-allow-create-keytab ipa-service-disallow-create-keytab For further changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.1 Release Notes . IdM 3.3 running on RHEL 7.0 For changes in this release, see the New Features - Authentication and Interoperability section in the Red Hat Enterprise Linux 7.0 Release Notes .
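The following is a brief, illustrative session with some of the commands listed above; availability depends on the IdM version in use, and output varies by deployment:

```bash
# Authenticate as an administrative user first.
kinit admin

# Display the current domain level (IdM 4.4 on RHEL 7.3 and later).
ipa domainlevel-show

# List the IdM servers recorded under cn=masters,cn=ipa,cn=etc,<domain suffix>.
ipa server-find

# List the configured certificate authorities.
ipa ca-find
```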
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/notable_changes_in_idm
|
Chapter 6. Red Hat Process Automation Manager roles and users
|
Chapter 6. Red Hat Process Automation Manager roles and users To access Business Central or KIE Server, you must create users and assign them appropriate roles before the servers are started. You can create users and roles when you install Business Central or KIE Server. If both Business Central and KIE Server are running on a single instance, a user who is authenticated for Business Central can also access KIE Server. However, if Business Central and KIE Server are running on different instances, a user who is authenticated for Business Central must be authenticated separately to access KIE Server. For example, if a user who is authenticated on Business Central but not authenticated on KIE Server tries to view or manage process definitions in Business Central, a 401 error is logged in the log file and the Invalid credentials to load data from remote server. Contact your system administrator. message appears in Business Central. This section describes Red Hat Process Automation Manager user roles. Note The admin , analyst , developer , manager , process-admin , user , and rest-all roles are reserved for Business Central. The kie-server role is reserved for KIE Server. For this reason, the available roles can differ depending on whether Business Central, KIE Server, or both are installed. admin : Users with the admin role are the Business Central administrators. They can manage users and create, clone, and manage repositories. They have full access to make required changes in the application. Users with the admin role have access to all areas within Red Hat Process Automation Manager. analyst : Users with the analyst role have access to all high-level features. They can model and execute their projects. However, these users cannot add contributors to spaces or delete spaces in the Design Projects view. Access to the Deploy Execution Servers view, which is intended for administrators, is not available to users with the analyst role. However, the Deploy button is available to these users when they access the Library perspective. developer : Users with the developer role have access to almost all features and can manage rules, models, process flows, forms, and dashboards. They can manage the asset repository, they can create, build, and deploy projects. Only certain administrative functions such as creating and cloning a new repository are hidden from users with the developer role. manager : Users with the manager role can view reports. These users are usually interested in statistics about the business processes and their performance, business indicators, and other business-related reporting. A user with this role has access only to process and task reports. process-admin : Users with the process-admin role are business process administrators. They have full access to business processes, business tasks, and execution errors. These users can also view business reports and have access to the Task Inbox list. user : Users with the user role can work on the Task Inbox list, which contains business tasks that are part of currently running processes. Users with this role can view process and task reports and manage processes. rest-all : Users with the rest-all role can access Business Central REST capabilities. kie-server : Users with the kie-server role can access KIE Server REST capabilities. This role is mandatory for users to have access to Manage and Track views in Business Central.
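As an illustration only: on Red Hat JBoss EAP installations that use the legacy property-file realms, users and their roles can be created with the add-user.sh script. The user names and passwords below are placeholders, and the exact procedure depends on your EAP version and security configuration:

```bash
# Create an application-realm user with Business Central administration and REST access.
$EAP_HOME/bin/add-user.sh -a --user pamAdmin --password 'Password1!' --role admin,rest-all

# Create a user that is allowed to call the KIE Server REST API.
$EAP_HOME/bin/add-user.sh -a --user kieServerUser --password 'Password1!' --role kie-server
```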
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/roles-users-con_planning
|
10.10. Quorum Disk Does Not Appear as Cluster Member
|
10.10. Quorum Disk Does Not Appear as Cluster Member If you have configured your system to use a quorum disk but the quorum disk does not appear as a member of your cluster, you can perform the following steps. Review the /var/log/cluster/qdiskd.log file. Run ps -ef | grep qdisk to determine if the process is running. Ensure that <quorumd...> is configured correctly in the /etc/cluster/cluster.conf file. Enable debugging output for the qdiskd daemon. For information on enabling debugging in the /etc/cluster/cluster.conf file, see Section 8.7, "Configuring Debug Options" . For information on enabling debugging using luci , see Section 4.5.6, "Logging Configuration" . For information on enabling debugging with the ccs command, see Section 6.14.4, "Logging" . Note that it may take multiple minutes for the quorum disk to register with the cluster. This is normal and expected behavior.
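The checks above can be run directly from a shell on the affected node; the paths are the defaults referenced in this section:

```bash
# Review recent quorum disk daemon messages.
tail -n 50 /var/log/cluster/qdiskd.log

# Verify that the qdiskd process is running.
ps -ef | grep qdisk

# Confirm that a <quorumd> element is present in the cluster configuration.
grep -A 3 "<quorumd" /etc/cluster/cluster.conf
```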
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-qdisknotmember-CA
|
Chapter 1. Overview of AMQ Streams
|
Chapter 1. Overview of AMQ Streams AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This guide provides instructions for evaluating a working environment of AMQ Streams. The steps describe how to get a AMQ Streams deployment up-and-running as quickly as possible. Before trying AMQ Streams, it is useful to understand its capabilities and how you might wish to use it. This chapter introduces some of the key concepts behind Kafka, and also provides a brief overview of the AMQ Streams Operators. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. How AMQ Streams supports Kafka AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka. Operators simplify the process of: Deploying and running Kafka clusters Deploying and running Kafka components Configuring access to Kafka Securing access to Kafka Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users 1.4. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 1.5. Document Conventions Replaceables In this document, replaceable text is styled in monospace , with italics, uppercase, and hyphens. For example, in the following code, you will want to replace MY-NAMESPACE with the name of your namespace:
|
[
"sed -i 's/namespace: .*/namespace: MY-NAMESPACE /' install/cluster-operator/*RoleBinding*.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/evaluating_amq_streams_on_openshift/overview-str
|
Chapter 38. Adding/Removing a Logical Unit Through rescan-scsi-bus.sh
|
Chapter 38. Adding/Removing a Logical Unit Through rescan-scsi-bus.sh The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update the logical unit configuration of the host as needed (after a device has been added to the system). The rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more information about how to use this script, refer to rescan-scsi-bus.sh --help . To install the sg3_utils package, run yum install sg3_utils . Known Issues With rescan-scsi-bus.sh When using the rescan-scsi-bus.sh script, take note of the following known issues: In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical unit. The rescan-scsi-bus.sh script can only detect the first mapped logical unit if it is LUN0 . The rescan-scsi-bus.sh script cannot scan any other logical unit unless it detects the first mapped logical unit, even if you use the --nooptscan option. A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0 ; all other logical units are added in the second scan. A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing a change in logical unit size when the --remove option is used. The rescan-scsi-bus.sh script does not recognize iSCSI logical unit removals.
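A short sketch that follows the notes above; which logical units appear depends on your storage mapping:

```bash
# Install the package that provides the script.
yum install sg3_utils

# Because of the race condition described above, run the scan twice the first time
# logical units are mapped: the first pass adds LUN0, the second adds the rest.
rescan-scsi-bus.sh
rescan-scsi-bus.sh

# Review the available options, including --nooptscan and --remove.
rescan-scsi-bus.sh --help
```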
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/logical-unit-add-remove
|
Chapter 4. User-provisioned infrastructure
|
Chapter 4. User-provisioned infrastructure 4.1. Preparing to install a cluster on AWS You prepare to install an OpenShift Container Platform cluster on AWS by completing the following steps: Verifying internet connectivity for your cluster. Configuring an AWS account . Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Preparing the user-provisioned infrastructure. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, manually creating long-term credentials for AWS or configuring an AWS cluster to use short-term credentials with Amazon Web Services Security Token Service (AWS STS). 4.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.1.2. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. 
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.1.3. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.1.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.2. Installation requirements for user-provisioned infrastructure on AWS Before you begin an installation on infrastructure that you provision, be sure that your AWS environment meets the following installation requirements. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 4.2.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 4.2.1.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.2.1.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 4.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 4.2.1.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 4.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 4.2.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 4.2.3. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 4.2.3.1. 
Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 
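If you choose one of the VPC endpoint options described earlier in this section for a restricted or disconnected environment, you can create the endpoints with the AWS CLI. The following is a minimal sketch; the region, VPC ID, subnet IDs, security group ID, and route table ID are placeholders that you must replace with your own values:

# Interface endpoints for the EC2 and Elastic Load Balancing APIs
for service in ec2 elasticloadbalancing; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-<random_string> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.<aws_region>.${service} \
    --subnet-ids subnet-<random_string> \
    --security-group-ids sg-<random_string> \
    --private-dns-enabled
done

# Gateway endpoint for S3, attached to the route tables that the cluster subnets use
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-<random_string> \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.<aws_region>.s3 \
  --route-table-ids rtb-<random_string>

Enabling private DNS on the interface endpoints lets cluster traffic keep using the regional service names, such as ec2.<aws_region>.amazonaws.com, while staying inside your VPC.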
Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 4.2.3.2. 
Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 4.2.4. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 4.3. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribePublicIpv4Pools (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:DisassociateAddress (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 4.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 4.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener elasticloadbalancing:SetSecurityGroups Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 4.6. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you specify an existing IAM role in the install-config.yaml file, the following IAM permissions are not required: iam:CreateRole , iam:DeleteRole , iam:DeleteRolePolicy , and iam:PutRolePolicy . If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 4.7. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 4.8. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketTagging s3:PutEncryptionConfiguration Example 4.9. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 4.10. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 4.11. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 4.12. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 4.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 4.14. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 4.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 4.16. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole Example 4.17. Required permissions for enabling Bring your own public IPv4 addresses (BYOIP) feature for installation ec2:DescribePublicIpv4Pools ec2:DisassociateAddress 4.2.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . 4.3. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.17, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. 
Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.3.2. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.3.2.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.3.2.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. 
If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.3.2.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
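Before you save the file, you can optionally verify from the installation host that the proxy you specified can reach an external site that your cluster requires access to. This is only a quick sketch; the proxy URL below is a placeholder and quay.io is used as an example destination:

# Send a HEAD request through the proxy to check basic connectivity (placeholder proxy URL)
curl -I --proxy http://<username>:<pswd>@<ip>:<port> https://quay.io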
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.3.2.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. 
Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 4.3.3. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.3.4. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.3.4.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.18. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.5. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. 
If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
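Optionally, before you launch the stack in the next step, you can ask CloudFormation to check the syntax of the template that you saved and confirm that your parameters file parses as JSON. This is a minimal sketch; <template> and <parameters> stand for whatever file names you used:

# Validate the CloudFormation template syntax before creating the stack
aws cloudformation validate-template --template-body file://<template>.yaml

# Confirm that the parameters file is well-formed JSON
jq . <parameters>.json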
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.3.5.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.19. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.3.6. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.3.6.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.20. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.7. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.3.8. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-019b3e090bb062842 ap-east-1 ami-0cb76d97f77cda0a1 ap-northeast-1 ami-0d7d4b329e5403cfb ap-northeast-2 ami-02d3789d532feb517 ap-northeast-3 ami-08b82c4899109b707 ap-south-1 ami-0c184f8b5ad8af69d ap-south-2 ami-0b0525037b9a20e9a ap-southeast-1 ami-0dbee0006375139a7 ap-southeast-2 ami-043072b1af91be72f ap-southeast-3 ami-09d8bbf16b228139e ap-southeast-4 ami-01c6b29e9c57b434b ca-central-1 ami-06fda1fa0b65b864b ca-west-1 ami-0141eea486b5e2c43 eu-central-1 ami-0f407de515454fdd0 eu-central-2 ami-062cfad83bc7b71b8 eu-north-1 ami-0af77aba6aebb5086 eu-south-1 ami-04d9da83bc9f854fc eu-south-2 ami-035d487abf54f0af7 eu-west-1 ami-043dd3b788dbaeb1c eu-west-2 ami-0c7d0f90a4401b723 eu-west-3 ami-039baa878e1def55f il-central-1 ami-07d305bf03b2148de me-central-1 ami-0fc457e8897ccb41a me-south-1 ami-0af99a751cf682b90 sa-east-1 ami-04a7300f64ee01d68 us-east-1 ami-01b53f2824bf6d426 us-east-2 ami-0565349610e27bd41 us-gov-east-1 ami-0020504fa043fe41d us-gov-west-1 ami-036798bce4722d3c2 us-west-1 ami-0147c634ad692da52 us-west-2 ami-0c65d71e89d43aa90 Table 4.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0e585ef53405bebf5 ap-east-1 ami-05f32f1715bb51bda ap-northeast-1 ami-05ecb62bab0c50e52 ap-northeast-2 ami-0a3ffb2c07c9e4a8d ap-northeast-3 ami-0ae6746ea17d1042c ap-south-1 ami-00deb5b08c86060b8 ap-south-2 ami-047a47d5049781e03 ap-southeast-1 ami-09cb598f0d36fde4c ap-southeast-2 ami-01fe8a7538500f24c ap-southeast-3 ami-051b3f67dd787d5e9 ap-southeast-4 ami-04d2e14a9eef40143 ca-central-1 ami-0f66973ff12d09356 ca-west-1 ami-0c9f3e2f0470d6d0b eu-central-1 ami-0a79af8849b425a8a eu-central-2 ami-0f9f66951c9709471 eu-north-1 ami-0670362aa7eb9032d eu-south-1 ami-031b24b970eae750b eu-south-2 ami-0734d2ed55c00a46c eu-west-1 ami-0a9af75c2649471c0 eu-west-2 ami-0b84155a3672ac44e eu-west-3 ami-02b51442c612818d4 il-central-1 ami-0d2c47a297d483ce4 me-central-1 ami-0ef3005246bd83b07 me-south-1 ami-0321ca1ee89015eda sa-east-1 ami-0e63f1103dc71d8ae us-east-1 ami-0404da96615c73bec us-east-2 ami-04c3bd7be936f728f us-gov-east-1 ami-0d30bc0b99b153247 us-gov-west-1 ami-0ee006f84d6aa5045 us-west-1 ami-061bfd61d5cfd7aa6 us-west-2 ami-05ffb8f6f18b8e3f8 4.3.8.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 4.3.8.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. 
You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.17.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 4.3.9. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. 
Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
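If your environment requires a presigned URL instead of the s3:// schema, as described earlier in this procedure, one way to generate it is with the aws s3 presign command. The following is a hedged sketch only: the bucket name follows the example above, and the one-hour expiry is an arbitrary choice.

# Generate a time-limited HTTPS URL for the bootstrap Ignition config file.
# Use the printed URL in place of the s3://<cluster-name>-infra/bootstrap.ign value
# when you fill in the bootstrap Ignition location parameter.
aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600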
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.3.9.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.21. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.3.10. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
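The parameter file that you create in the following procedure requires the base64 encoded certificate authority string from the master.ign file in your installation directory. The following jq sketch shows one possible way to extract it; the .ignition.security.tls path reflects the Ignition v3 config layout that openshift-install generates, so verify it against your own file.

# Print the CertificateAuthorities parameter value from master.ign.
# The output has the form data:text/plain;charset=utf-8;base64,ABC...xYz==
jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign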
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
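The create-stack command returns as soon as AWS starts creating the stack. If you prefer to block until the control plane stack finishes creating before you inspect it, you can optionally use the AWS CLI wait command. The following is a minimal sketch, where <name> is the stack name that you chose:
# Block until the stack reaches the CREATE_COMPLETE state
$ aws cloudformation wait stack-create-complete --stack-name <name>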
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.3.10.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.22. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.11. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>. 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker. 11 The base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==. 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line.
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.3.11.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.23. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 4.3.12. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 4.3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m Configure the Operators that are not available. 4.3.15.1. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 4.3.15.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. 
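You can apply the lifecycle policy from the AWS CLI. The following is a minimal sketch, where <bucket-name> is the S3 bucket that you created for the registry and the rule ID is only an example:
# Abort incomplete multipart uploads after one day
$ aws s3api put-bucket-lifecycle-configuration \
    --bucket <bucket-name> \
    --lifecycle-configuration '{
      "Rules": [
        {
          "ID": "abort-incomplete-multipart-uploads",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
        }
      ]
    }'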
Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.3.15.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.3.16. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.3.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. 
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 
4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.3.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.3.19. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. 
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.3.20. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.3.21. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 4.4. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.17, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 4.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You prepared the user-provisioned infrastructure. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . 4.4.2. About installations in restricted networks In OpenShift Container Platform 4.17, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 4.4.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 4.4.3. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 4.4.3.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. 
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 4.4.3.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS Region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- Add the image content resources: imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 4.4.3.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.4.3.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
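Optional: Before you generate the Ignition config files, you can verify both manifest edits from the command line. The following commands are a minimal check that uses grep against the files referenced in the previous steps:
USD grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
USD grep -E "privateZone|publicZone" <installation_directory>/manifests/cluster-dns-02-config.yml
The first command is expected to show mastersSchedulable: false. The second command is expected to return no output if you removed the privateZone and publicZone sections.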
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating long-term credentials 4.4.4. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 4.4.5. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 4.4.5.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 4.24. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ ",", [ !Join ["=", [ !Select [0, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join ["=", [!Select [1, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable2]], !Ref "AWS::NoValue" ], !If [DoAz3, !Join ["=", [!Select [2, "Fn::GetAZs": !Ref "AWS::Region"], !Ref PrivateRouteTable3]], !Ref "AWS::NoValue" ] ] ] 4.4.6. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
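The parameter file in the following procedure reuses the VpcId, PublicSubnetIds, and PrivateSubnetIds values from the output of the CloudFormation template for the VPC. Instead of copying these values from the AWS console, you can read them back with the AWS CLI. The following command is a sketch that assumes you named the VPC stack cluster-vpc, as in the earlier example:
USD aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs' --output table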
Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line.
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 4.4.6.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 4.25. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.11" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.11" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 4.4.7. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
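As with the previous stacks, the following procedure copies a provided template and saves it as a YAML file on your computer. If you edit the template, or if you want to catch copy-and-paste errors before you launch the stack, you can check the template syntax with the CloudFormation validate-template command. The following sketch assumes that <template>.yaml is the file that you saved:
USD aws cloudformation validate-template --template-body file://<template>.yaml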
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 4.4.7.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 4.26. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile 4.4.8. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 4.4.9. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 4.5. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-019b3e090bb062842 ap-east-1 ami-0cb76d97f77cda0a1 ap-northeast-1 ami-0d7d4b329e5403cfb ap-northeast-2 ami-02d3789d532feb517 ap-northeast-3 ami-08b82c4899109b707 ap-south-1 ami-0c184f8b5ad8af69d ap-south-2 ami-0b0525037b9a20e9a ap-southeast-1 ami-0dbee0006375139a7 ap-southeast-2 ami-043072b1af91be72f ap-southeast-3 ami-09d8bbf16b228139e ap-southeast-4 ami-01c6b29e9c57b434b ca-central-1 ami-06fda1fa0b65b864b ca-west-1 ami-0141eea486b5e2c43 eu-central-1 ami-0f407de515454fdd0 eu-central-2 ami-062cfad83bc7b71b8 eu-north-1 ami-0af77aba6aebb5086 eu-south-1 ami-04d9da83bc9f854fc eu-south-2 ami-035d487abf54f0af7 eu-west-1 ami-043dd3b788dbaeb1c eu-west-2 ami-0c7d0f90a4401b723 eu-west-3 ami-039baa878e1def55f il-central-1 ami-07d305bf03b2148de me-central-1 ami-0fc457e8897ccb41a me-south-1 ami-0af99a751cf682b90 sa-east-1 ami-04a7300f64ee01d68 us-east-1 ami-01b53f2824bf6d426 us-east-2 ami-0565349610e27bd41 us-gov-east-1 ami-0020504fa043fe41d us-gov-west-1 ami-036798bce4722d3c2 us-west-1 ami-0147c634ad692da52 us-west-2 ami-0c65d71e89d43aa90 Table 4.6. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0e585ef53405bebf5 ap-east-1 ami-05f32f1715bb51bda ap-northeast-1 ami-05ecb62bab0c50e52 ap-northeast-2 ami-0a3ffb2c07c9e4a8d ap-northeast-3 ami-0ae6746ea17d1042c ap-south-1 ami-00deb5b08c86060b8 ap-south-2 ami-047a47d5049781e03 ap-southeast-1 ami-09cb598f0d36fde4c ap-southeast-2 ami-01fe8a7538500f24c ap-southeast-3 ami-051b3f67dd787d5e9 ap-southeast-4 ami-04d2e14a9eef40143 ca-central-1 ami-0f66973ff12d09356 ca-west-1 ami-0c9f3e2f0470d6d0b eu-central-1 ami-0a79af8849b425a8a eu-central-2 ami-0f9f66951c9709471 eu-north-1 ami-0670362aa7eb9032d eu-south-1 ami-031b24b970eae750b eu-south-2 ami-0734d2ed55c00a46c eu-west-1 ami-0a9af75c2649471c0 eu-west-2 ami-0b84155a3672ac44e eu-west-3 ami-02b51442c612818d4 il-central-1 ami-0d2c47a297d483ce4 me-central-1 ami-0ef3005246bd83b07 me-south-1 ami-0321ca1ee89015eda sa-east-1 ami-0e63f1103dc71d8ae us-east-1 ami-0404da96615c73bec us-east-2 ami-04c3bd7be936f728f us-gov-east-1 ami-0d30bc0b99b153247 us-gov-west-1 ami-0ee006f84d6aa5045 us-west-1 ami-061bfd61d5cfd7aa6 us-west-2 ami-05ffb8f6f18b8e3f8 4.4.10. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 
6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. 
Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 4.4.10.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 4.27. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. 
Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 4.4.11. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": 
"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 
36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 4.4.11.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 4.28. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: 
Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] 4.4.12. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. 
You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to start the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
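If you prefer to populate the parameter file from the command line rather than by hand, the following sketch shows one way to retrieve the CertificateAuthorities and WorkerInstanceProfileName values that are described in the callouts in the preceding step. This is a sketch only, not part of the documented procedure: it assumes that the jq package is installed, that <installation_directory> contains the Ignition config files you generated, and that <security_stack_name> is the name you gave the stack that created the security groups and roles. The jq path follows the pointer-config structure that is shown in the UserData property of the worker CloudFormation template. # Sketch only: <installation_directory> and <security_stack_name> are placeholders. # CertificateAuthorities (callout 12): read the CA source string from worker.ign. CA_BUNDLE=$(jq -r '.ignition.security.tls.certificateAuthorities[0].source' \ <installation_directory>/worker.ign) # WorkerInstanceProfileName (callout 14): read the WorkerInstanceProfile output # from the security group and roles stack. WORKER_PROFILE=$(aws cloudformation describe-stacks \ --stack-name <security_stack_name> \ --query "Stacks[0].Outputs[?OutputKey=='WorkerInstanceProfile'].OutputValue" \ --output text) echo "CertificateAuthorities: ${CA_BUNDLE:0:40}..." echo "WorkerInstanceProfileName: ${WORKER_PROFILE}" Paste the printed values into the corresponding ParameterValue fields of the JSON file.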
Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 4.4.12.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 4.29. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp 4.4.13. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. 
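If the wait-for bootstrap-complete command times out, it can help to retry with debug logging and to collect bootstrap diagnostics before contacting Red Hat support, as covered by the Gathering bootstrap node diagnostic data resource listed below. The following is a sketch only, not part of the documented procedure: it assumes that <installation_directory> is your installation directory, that <bootstrap_stack> and <control_plane_stack> are the names of your bootstrap and control plane CloudFormation stacks, and that the SSH key from your install-config.yaml file is available to your SSH agent. # Sketch only: placeholders as described above. BOOTSTRAP_IP=$(aws cloudformation describe-stacks --stack-name <bootstrap_stack> \ --query "Stacks[0].Outputs[?OutputKey=='BootstrapPublicIp'].OutputValue" --output text) MASTER_IPS=$(aws cloudformation describe-stacks --stack-name <control_plane_stack> \ --query "Stacks[0].Outputs[?OutputKey=='PrivateIPs'].OutputValue" --output text) if ! ./openshift-install wait-for bootstrap-complete \ --dir <installation_directory> --log-level=debug; then # Collect journals and container logs from the bootstrap and control plane nodes # over SSH before opening a support case. ./openshift-install gather bootstrap --dir <installation_directory> \ --bootstrap "${BOOTSTRAP_IP}" \ $(for ip in ${MASTER_IPS//,/ }; do echo "--master ${ip}"; done) fi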
Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. 4.4.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 4.4.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m Configure the Operators that are not available. 4.4.15.1. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 4.4.15.2. Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.4.15.2.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 4.4.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.4.16. 
Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 4.4.17. Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . 
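Because the record-creation commands in the next steps reuse the external IP address and the two hosted zone IDs that you gathered above, it can help to capture those values once in shell variables before continuing. The following is a minimal sketch only, assuming the AWS CLI and jq are installed as listed in the prerequisites; the variable names are illustrative, and <domain_name> is the same placeholder used above.
# Capture the Ingress load balancer hostname reported by the router-default service.
EXTERNAL_IP=$(oc -n openshift-ingress get service router-default \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Look up the hosted zone ID that belongs to that load balancer.
LB_HOSTED_ZONE_ID=$(aws elb describe-load-balancers \
  | jq -r --arg dns "$EXTERNAL_IP" '.LoadBalancerDescriptions[] | select(.DNSName == $dns).CanonicalHostedZoneNameID')
# Look up the public hosted zone ID for the cluster base domain.
# (The returned value includes a /hostedzone/ prefix, which you might need to strip.)
PUBLIC_HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name "<domain_name>" \
  --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
  --output text)
echo "$EXTERNAL_IP $LB_HOSTED_ZONE_ID $PUBLIC_HOSTED_ZONE_ID"
You can then substitute these values for <external_ip>, <hosted_zone_id>, and <public_hosted_zone_id> in the record-creation commands that follow.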
Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value. 4.4.18. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion. Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI. Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Register your cluster on the Cluster registration page. 4.4.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin 4.4.20. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.4.21. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. 4.4.22. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager in disconnected environments . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting .
If necessary, see Registering your disconnected cluster . If necessary, you can remove cloud provider credentials . 4.5. Installing a cluster with the support for configuring multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines supports compute machines with different architectures. Note When you have nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You must ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Scheduling workloads on clusters with multi-architecture compute machines . You can install an AWS cluster with the support for configuring multi-architecture compute machines. After installing the AWS cluster, you can add multi-architecture compute machines to the cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". 4.5.1. Installing a cluster with multi-architecture support You can install a cluster with the support for configuring multi-architecture compute machines. Prerequisites You installed the OpenShift CLI ( oc ). You have the OpenShift Container Platform installation program. You downloaded the pull secret for your cluster. Procedure Check that the openshift-install binary is using the multi payload by running the following command: USD ./openshift-install version Example output ./openshift-install 4.17.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64 The output must contain release architecture multi to indicate that the openshift-install binary is using the multi payload. Update the install-config.yaml file to configure the architecture for the nodes. Sample install-config.yaml file with multi-architecture configuration apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3 # ... 1 Specify the architecture of the worker node. You can set this field to either arm64 or amd64 . 2 Specify the control plane node architecture. You can set this field to either arm64 or amd64 . Next steps Deploying the cluster Additional resources Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator
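After the cluster is installed from the multi payload and compute machines of a second architecture are added, you might want to confirm what each node actually reports. The following is a minimal verification sketch, not part of the documented procedure; the custom-columns names are illustrative.
# Re-check that the installer binary was built from the multi payload.
./openshift-install version | grep 'release architecture'
# List each node with the CPU architecture that its kubelet reports.
oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture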
|
[
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"export AWS_PROFILE=<aws_profile> 1",
"export AWS_DEFAULT_REGION=<aws_region> 1",
"export RHCOS_VERSION=<version> 1",
"export VMIMPORT_BUCKET_NAME=<s3_bucket_name>",
"cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF",
"aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2",
"watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}",
"{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }",
"aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - 
GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. 
Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable PrivateRouteTableIds: Description: Private Route table IDs Value: !Join [ \",\", [ !Join [\"=\", [ !Select [0, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable ]], !If [DoAz2, !Join [\"=\", [!Select [1, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable2]], !Ref \"AWS::NoValue\" ], !If [DoAz3, !Join [\"=\", [!Select [2, \"Fn::GetAZs\": !Ref \"AWS::Region\"], !Ref PrivateRouteTable3]], !Ref \"AWS::NoValue\" ] ] ]",
"aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1",
"mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10",
"[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", 
[\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} 
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.11\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.11\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup",
"Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp 
MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId 
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - 
\"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile",
"openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'",
"ami-0d3e625f84626bbda",
"openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'",
"ami-0af1d3b7fa5be2131",
"aws s3 mb s3://<cluster-name>-infra 1",
"aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1",
"aws s3 ls s3://<cluster-name>-infra/",
"2019-04-03 16:15:16 314878 bootstrap.ign",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]",
"[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]",
"aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3",
"arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59",
"aws cloudformation describe-stacks --stack-name <name>",
"AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"aws cloudformation delete-stack --stack-name <name> 1",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m",
"aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1",
"Z3AADJGX6KTTL2",
"aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text",
"/hostedzone/Z3URY6TWQ91KVV",
"aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"./openshift-install version",
"./openshift-install 4.17.0 built from commit abc123etc release image quay.io/openshift-release-dev/ocp-release@sha256:abc123wxyzetc release architecture multi default architecture amd64",
"apiVersion: v1 baseDomain: example.openshift.com compute: - architecture: amd64 1 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: arm64 2 name: master platform: {} replicas: 3"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_aws/user-provisioned-infrastructure
|
Chapter 23. Viewing and Managing Log Files
|
Chapter 23. Viewing and Managing Log Files Log files are files that contain messages about the system, including the kernel, services, and applications running on it. There are different log files for different information. For example, there is a default system log file, a log file just for security messages, and a log file for cron tasks. Log files can be very useful when troubleshooting a problem with the system, such as when trying to load a kernel driver, or when looking for unauthorized login attempts. This chapter discusses where to find log files, how to view them, and what to look for in them. Some log files are controlled by a daemon called rsyslogd . The rsyslogd daemon is an enhanced replacement for sysklogd , and provides extended filtering, encryption-protected relaying of messages, various configuration options, input and output modules, and support for transport over the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd . Log files can also be managed by the journald daemon - a component of systemd . The journald daemon captures Syslog messages, kernel log messages, initial RAM disk and early boot messages, as well as messages written to the standard output and standard error output of all services, indexes them, and makes them available to the user. The native journal file format, which is a structured and indexed binary file, improves searching and provides faster operation, and it also stores metadata such as time stamps and user IDs. Log files produced by journald are not persistent by default; they are stored only in memory or in a small ring buffer in the /run/log/journal/ directory. The amount of logged data depends on free memory; when the capacity limit is reached, the oldest entries are deleted. However, this setting can be altered - see Section 23.10.5, "Enabling Persistent Storage" . For more information on Journal see Section 23.10, "Using the Journal" . By default, these two logging tools coexist on your system. The journald daemon is the primary tool for troubleshooting. It also provides additional data necessary for creating structured log messages. Data acquired by journald is forwarded into the /run/systemd/journal/syslog socket, which may be used by rsyslogd to process the data further. However, rsyslog performs the actual integration by default through the imjournal input module, thus avoiding the aforementioned socket. You can also transfer data in the opposite direction, from rsyslogd to journald , with the omjournal module. See Section 23.7, "Interaction of Rsyslog and Journal" for further information. The integration enables maintaining text-based logs in a consistent format to ensure compatibility with applications or configurations that depend on rsyslogd . You can also maintain rsyslog messages in a structured format (see Section 23.8, "Structured Logging with Rsyslog" ). 23.1. Locating Log Files A list of log files maintained by rsyslogd can be found in the /etc/rsyslog.conf configuration file. Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba , have a directory within /var/log/ for their log files. You may notice multiple files in the /var/log/ directory with numbers after them (for example, cron-20100906 ). These numbers represent a time stamp that has been added to a rotated log file. Log files are rotated so that their file sizes do not become too large.
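For example, a rotated cron log might appear alongside the active file as follows (a hypothetical listing; the actual file names depend on your rotation schedule):

ls /var/log/cron*
/var/log/cron  /var/log/cron-20100806  /var/log/cron-20100906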
The logrotate package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf configuration file and the configuration files in the /etc/logrotate.d/ directory. 23.2. Basic Configuration of Rsyslog The main configuration file for rsyslog is /etc/rsyslog.conf . Here, you can specify global directives , modules , and rules that consist of filter and action parts. Also, you can add comments in the form of text following a hash sign ( # ). 23.2.1. Filters A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which specifies what to do with the selected messages. To define a rule in your /etc/rsyslog.conf configuration file, define both, a filter and an action, on one line and separate them with one or more spaces or tabs. rsyslog offers various ways to filter syslog messages according to selected properties. The available filtering methods can be divided into Facility/Priority-based , Property-based , and Expression-based filters. Facility/Priority-based filters The most used and well-known way to filter syslog messages is to use the facility/priority-based filters which filter syslog messages based on two conditions: facility and priority separated by a dot. To create a selector, use the following syntax: where: FACILITY specifies the subsystem that produces a specific syslog message. For example, the mail subsystem handles all mail-related syslog messages. FACILITY can be represented by one of the following keywords (or by a numerical code): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), cron (8), authpriv (9), ftp (10), and local0 through local7 (16 - 23). PRIORITY specifies a priority of a syslog message. PRIORITY can be represented by one of the following keywords (or by a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). The aforementioned syntax selects syslog messages with the defined or higher priority. By preceding any priority keyword with an equal sign ( = ), you specify that only syslog messages with the specified priority will be selected. All other priorities will be ignored. Conversely, preceding a priority keyword with an exclamation mark ( ! ) selects all syslog messages except those with the defined priority. In addition to the keywords specified above, you may also use an asterisk ( * ) to define all facilities or priorities (depending on where you place the asterisk, before or after the comma). Specifying the priority keyword none serves for facilities with no given priorities. Both facility and priority conditions are case-insensitive. To define multiple facilities and priorities, separate them with a comma ( , ). To define multiple selectors on one line, separate them with a semi-colon ( ; ). Note that each selector in the selector field is capable of overwriting the preceding ones, which can exclude some priorities from the pattern. Example 23.1. Facility/Priority-based Filters The following are a few examples of simple facility/priority-based filters that can be specified in /etc/rsyslog.conf . 
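A hedged sketch of such rules as they could appear in /etc/rsyslog.conf (the target file paths are illustrative):

# all kernel messages, any priority
kern.*                      /var/log/kern.log
# mail messages with priority crit and higher
mail.crit                   /var/log/maillog
# cron messages except info and debug
cron.!info,!debug           /var/log/cron.log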
To select all kernel syslog messages with any priority, add the following text into the configuration file: To select all mail syslog messages with priority crit and higher, use this form: To select all cron syslog messages except those with the info or debug priority, set the configuration in the following form: Property-based filters Property-based filters let you filter syslog messages by any property, such as timegenerated or syslogtag . For more information on properties, see the section called "Properties" . You can compare each of the specified properties to a particular value using one of the compare-operations listed in Table 23.1, "Property-based compare-operations" . Both property names and compare operations are case-sensitive. Property-based filter must start with a colon ( : ). To define the filter, use the following syntax: where: The PROPERTY attribute specifies the desired property. The optional exclamation point ( ! ) negates the output of the compare-operation. Other Boolean operators are currently not supported in property-based filters. The COMPARE_OPERATION attribute specifies one of the compare-operations listed in Table 23.1, "Property-based compare-operations" . The STRING attribute specifies the value that the text provided by the property is compared to. This value must be enclosed in quotation marks. To escape certain character inside the string (for example a quotation mark ( " )), use the backslash character ( \ ). Table 23.1. Property-based compare-operations Compare-operation Description contains Checks whether the provided string matches any part of the text provided by the property. isequal Compares the provided string against all of the text provided by the property. These two values must be exactly equal to match. startswith Checks whether the provided string is found exactly at the beginning of the text provided by the property. regex Compares the provided POSIX BRE (Basic Regular Expression) against the text provided by the property. ereregex Compares the provided POSIX ERE (Extended Regular Expression) regular expression against the text provided by the property. isempty Checks if the property is empty. The value is discarded. This is especially useful when working with normalized data, where some fields may be populated based on normalization result. Example 23.2. Property-based Filters The following are a few examples of property-based filters that can be specified in /etc/rsyslog.conf . To select syslog messages which contain the string error in their message text, use: The following filter selects syslog messages received from the host name host1 : To select syslog messages which do not contain any mention of the words fatal and error with any or no text between them (for example, fatal lib error ), type: Expression-based filters Expression-based filters select syslog messages according to defined arithmetic, Boolean or string operations. Expression-based filters use rsyslog 's own scripting language called RainerScript to build complex filters. The basic syntax of expression-based filter looks as follows: where: The EXPRESSION attribute represents an expression to be evaluated, for example: USDmsg startswith 'DEVNAME' or USDsyslogfacility-text == 'local0' . You can specify more than one expression in a single filter by using and and or operators. Note that rsyslog supports case-insensitive comparisons in expression-based filters. 
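A hedged sketch of the property-based filters in Example 23.2 above (the target file paths are illustrative):

:msg, contains, "error"             /var/log/errors.log
:hostname, isequal, "host1"         /var/log/host1.log
:msg, !regex, "fatal .* error"      /var/log/other.log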
You can use contains_i or startswith_i compare-operations within the EXPRESSION attribute, for example: if USDhostname startswith_i "<HOST_NAME>" then ACTION . The ACTION attribute represents an action to be performed if the expression returns the value true . This can be a single action, or an arbitrary complex script enclosed in curly braces. Expression-based filters are indicated by the keyword if at the start of a new line. The then keyword separates the EXPRESSION from the ACTION . Optionally, you can employ the else keyword to specify what action is to be performed in case the condition is not met. With expression-based filters, you can nest the conditions by using a script enclosed in curly braces as in Example 23.3, "Expression-based Filters" . The script allows you to use facility/priority-based filters inside the expression. On the other hand, property-based filters are not recommended here. RainerScript supports regular expressions with specialized functions re_match() and re_extract() . Example 23.3. Expression-based Filters The following expression contains two nested conditions. The log files created by a program called prog1 are split into two files based on the presence of the "test" string in the message. See the section called "Online Documentation" for more examples of various expression-based filters. RainerScript is the basis for rsyslog 's new configuration format, see Section 23.3, "Using the New Configuration Format" 23.2.2. Actions Actions specify what is to be done with the messages filtered out by an already defined selector. The following are some of the actions you can define in your rule: Saving syslog messages to log files The majority of actions specify to which log file a syslog message is saved. This is done by specifying a file path after your already-defined selector: where FILTER stands for user-specified selector and PATH is a path of a target file. For instance, the following rule is comprised of a selector that selects all cron syslog messages and an action that saves them into the /var/log/cron.log log file: By default, the log file is synchronized every time a syslog message is generated. Use a dash mark ( - ) as a prefix of the file path you specified to omit syncing: Note that you might lose information if the system terminates right after a write attempt. However, this setting can improve performance, especially if you run programs that produce very verbose log messages. Your specified file path can be either static or dynamic . Static files are represented by a fixed file path as shown in the example above. Dynamic file paths can differ according to the received message. Dynamic file paths are represented by a template and a question mark ( ? ) prefix: where DynamicFile is a name of a predefined template that modifies output paths. You can use the dash prefix ( - ) to disable syncing, also you can use multiple templates separated by a colon ( ; ). For more information on templates, see the section called "Generating Dynamic File Names" . If the file you specified is an existing terminal or /dev/console device, syslog messages are sent to standard output (using special terminal -handling) or your console (using special /dev/console -handling) when using the X Window System, respectively. Sending syslog messages over the network rsyslog allows you to send and receive syslog messages over the network. This feature allows you to administer syslog messages of multiple hosts on one machine. 
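A hedged sketch of the rules described above: a nested expression-based filter along the lines of Example 23.3 that splits messages from prog1 by the presence of the string test, followed by the cron file action and its non-synchronized ( - prefixed) variant; file paths are illustrative:

if $programname == 'prog1' then {
    if $msg contains 'test' then /var/log/prog1test.log
    else /var/log/prog1notest.log
}
cron.*    /var/log/cron.log
cron.*    -/var/log/cron.log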
To forward syslog messages to a remote machine, use the following syntax: where: The at sign ( @ ) indicates that the syslog messages are forwarded to a host using the UDP protocol. To use the TCP protocol, use two at signs with no space between them ( @@ ). The optional z NUMBER setting enables zlib compression for syslog messages. The NUMBER attribute specifies the level of compression (from 1 - lowest to 9 - maximum). Compression gain is automatically checked by rsyslogd , messages are compressed only if there is any compression gain and messages below 60 bytes are never compressed. The HOST attribute specifies the host which receives the selected syslog messages. The PORT attribute specifies the host machine's port. When specifying an IPv6 address as the host, enclose the address in square brackets ( [ , ] ). Example 23.4. Sending syslog Messages over the Network The following are some examples of actions that forward syslog messages over the network (note that all actions are preceded with a selector that selects all messages with any priority). To forward messages to 192.168.0.1 via the UDP protocol, type: To forward messages to "example.com" using port 6514 and the TCP protocol, use: The following compresses messages with zlib (level 9 compression) and forwards them to 2001:db8::1 using the UDP protocol Output channels Output channels are primarily used to specify the maximum size a log file can grow to. This is very useful for log file rotation (for more information see Section 23.2.5, "Log Rotation" ). An output channel is basically a collection of information about the output action. Output channels are defined by the USDoutchannel directive. To define an output channel in /etc/rsyslog.conf , use the following syntax: where: The NAME attribute specifies the name of the output channel. The FILE_NAME attribute specifies the name of the output file. Output channels can write only into files, not pipes, terminal, or other kind of output. The MAX_SIZE attribute represents the maximum size the specified file (in FILE_NAME ) can grow to. This value is specified in bytes . The ACTION attribute specifies the action that is taken when the maximum size, defined in MAX_SIZE , is hit. To use the defined output channel as an action inside a rule, type: Example 23.5. Output channel log rotation The following output shows a simple log rotation through the use of an output channel. First, the output channel is defined via the USDoutchannel directive: and then it is used in a rule that selects every syslog message with any priority and executes the previously-defined output channel on the acquired syslog messages: Once the limit (in the example 100 MB) is hit, the /home/joe/log_rotation_script is executed. This script can contain anything from moving the file into a different folder, editing specific content out of it, or simply removing it. Sending syslog messages to specific users rsyslog can send syslog messages to specific users by specifying a user name of the user you want to send the messages to (as in Example 23.7, "Specifying Multiple Actions" ). To specify more than one user, separate each user name with a comma ( , ). To send messages to every user that is currently logged on, use an asterisk ( * ). Executing a program rsyslog lets you execute a program for selected syslog messages and uses the system() call to execute the program in shell. To specify a program to be executed, prefix it with a caret character ( ^ ). 
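Hedged sketches of the forwarding actions in Example 23.4 and of the output channel rotation in Example 23.5; the monitored file name is illustrative, and 104857600 bytes corresponds to the 100 MB limit mentioned above:

*.* @192.168.0.1
*.* @@example.com:6514
*.* @(z9)[2001:db8::1]

$outchannel log_rotation, /var/log/test_log, 104857600, /home/joe/log_rotation_script
*.* :omfile:$log_rotation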
Consequently, specify a template that formats the received message and passes it to the specified executable as a one line parameter (for more information on templates, see Section 23.2.3, "Templates" ). Here an output of the FILTER condition is processed by a program represented by EXECUTABLE . This program can be any valid executable. Replace TEMPLATE with the name of the formatting template. Example 23.6. Executing a Program In the following example, any syslog message with any priority is selected, formatted with the template template and passed as a parameter to the test-program program, which is then executed with the provided parameter: Warning When accepting messages from any host, and using the shell execute action, you may be vulnerable to command injection. An attacker may try to inject and execute commands in the program you specified to be executed in your action. To avoid any possible security threats, thoroughly consider the use of the shell execute action. Storing syslog messages in a database Selected syslog messages can be directly written into a database table using the database writer action. The database writer uses the following syntax: where: The PLUGIN calls the specified plug-in that handles the database writing (for example, the ommysql plug-in). The DB_HOST attribute specifies the database host name. The DB_NAME attribute specifies the name of the database. The DB_USER attribute specifies the database user. The DB_PASSWORD attribute specifies the password used with the aforementioned database user. The TEMPLATE attribute specifies an optional use of a template that modifies the syslog message. For more information on templates, see Section 23.2.3, "Templates" . Important Currently, rsyslog provides support for MySQL and PostgreSQL databases only. In order to use the MySQL and PostgreSQL database writer functionality, install the rsyslog-mysql and rsyslog-pgsql packages, respectively. Also, make sure you load the appropriate modules in your /etc/rsyslog.conf configuration file: For more information on rsyslog modules, see Section 23.6, "Using Rsyslog Modules" . Alternatively, you may use a generic database interface provided by the omlibdb module (supports: Firebird/Interbase, MS SQL, Sybase, SQLLite, Ingres, Oracle, mSQL). Discarding syslog messages To discard your selected messages, use stop . The discard action is mostly used to filter out messages before carrying on any further processing. It can be effective if you want to omit some repeating messages that would otherwise fill the log files. The results of discard action depend on where in the configuration file it is specified, for the best results place these actions on top of the actions list. Please note that once a message has been discarded there is no way to retrieve it in later configuration file lines. For instance, the following rule discards all messages that matches the local5.* filter: In the following example, any cron syslog messages are discarded: Note With versions prior to rsyslog 7, the tilde character ( ~ ) was used instead of stop to discard syslog messages. Specifying Multiple Actions For each selector, you are allowed to specify multiple actions. To specify multiple actions for one selector, write each action on a separate line and precede it with an ampersand (&) character: Specifying multiple actions improves the overall performance of the desired outcome since the specified selector has to be evaluated only once. Example 23.7. 
Specifying Multiple Actions In the following example, all kernel syslog messages with the critical priority ( crit ) are sent to user user1 , processed by the template temp and passed on to the test-program executable, and forwarded to 192.168.0.1 via the UDP protocol. Any action can be followed by a template that formats the message. To specify a template, suffix an action with a semicolon ( ; ) and specify the name of the template. For more information on templates, see Section 23.2.3, "Templates" . Warning A template must be defined before it is used in an action; otherwise, it is ignored. In other words, template definitions should always precede rule definitions in /etc/rsyslog.conf . 23.2.3. Templates Any output that is generated by rsyslog can be modified and formatted according to your needs with the use of templates . To create a template, use the following syntax in /etc/rsyslog.conf : where: template() is the directive introducing a block that defines a template. The TEMPLATE_NAME mandatory argument is used to refer to the template. Note that TEMPLATE_NAME should be unique. The type mandatory argument can acquire one of these values: "list", "subtree", "string" or "plugin". The string argument is the actual template text. Within this text, special characters, such as \n for newline or \r for carriage return, can be used. Other characters, such as % or " , have to be escaped if you want to use those characters literally. The text specified between two percent signs ( % ) specifies a property that allows you to access specific contents of a syslog message. For more information on properties, see the section called "Properties" . The OPTION attribute specifies any options that modify the template functionality. The currently supported template options are sql and stdsql , which are used for formatting the text as an SQL query, json , which formats the text to be suitable for JSON processing, and casesensitive , which sets case sensitivity of property names. Note Note that the database writer checks whether the sql or stdsql options are specified in the template. If they are not, the database writer does not perform any action. This is to prevent any possible security threats, such as SQL injection. See section Storing syslog messages in a database in Section 23.2.2, "Actions" for more information. Generating Dynamic File Names Templates can be used to generate dynamic file names. By specifying a property as a part of the file path, a new file will be created for each unique property, which is a convenient way to classify syslog messages. For example, use the timegenerated property, which extracts a time stamp from the message, to generate a unique file name for each syslog message: Keep in mind that the $template directive only specifies the template. You must use it inside a rule for it to take effect. In /etc/rsyslog.conf , use the question mark ( ? ) in an action definition to mark the dynamic file name template: Properties Properties defined inside a template (between two percent signs ( % )) enable access to various contents of a syslog message through the use of a property replacer . To define a property inside a template (between the two quotation marks ( "..." )), use the following syntax: where: The PROPERTY_NAME attribute specifies the name of a property.
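Hedged sketches of the actions and templates described above (Example 23.6, the discard rules, Example 23.7, and a dynamic file name template); program, template, and file names are illustrative:

# Example 23.6: pass each message, formatted by the template "template", to an executable
*.* ^test-program;template

# Discarding messages (rsyslog 7 and later)
local5.* stop
cron.* stop

# Example 23.7: several actions for one selector
kern.=crit user1
& ^test-program;temp
& @192.168.0.1

# Dynamic file name template and its use in a rule
$template DynamicFile,"/var/log/test_logs/%timegenerated%-test.log"
*.* ?DynamicFile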
A list of all available properties and their detailed description can be found in the rsyslog.conf(5) manual page under the section Available Properties . FROM_CHAR and TO_CHAR attributes denote a range of characters that the specified property will act upon. Alternatively, regular expressions can be used to specify a range of characters. To do so, set the letter R as the FROM_CHAR attribute and specify your desired regular expression as the TO_CHAR attribute. The OPTION attribute specifies any property options, such as the lowercase option to convert the input to lowercase. A list of all available property options and their detailed description can be found in the rsyslog.conf(5) manual page under the section Property Options . The following are some examples of simple properties: The following property obtains the whole message text of a syslog message: The following property obtains the first two characters of the message text of a syslog message: The following property obtains the whole message text of a syslog message and drops its last line feed character: The following property obtains the first 10 characters of the time stamp that is generated when the syslog message is received and formats it according to the RFC 3999 date standard. Template Examples This section presents a few examples of rsyslog templates. Example 23.8, "A verbose syslog message template" shows a template that formats a syslog message so that it outputs the message's severity, facility, the time stamp of when the message was received, the host name, the message tag, the message text, and ends with a new line. Example 23.8. A verbose syslog message template Example 23.9, "A wall message template" shows a template that resembles a traditional wall message (a message that is send to every user that is logged in and has their mesg(1) permission set to yes ). This template outputs the message text, along with a host name, message tag and a time stamp, on a new line (using \r and \n ) and rings the bell (using \7 ). Example 23.9. A wall message template Example 23.10, "A database formatted message template" shows a template that formats a syslog message so that it can be used as a database query. Notice the use of the sql option at the end of the template specified as the template option. It tells the database writer to format the message as an MySQL SQL query. Example 23.10. A database formatted message template rsyslog also contains a set of predefined templates identified by the RSYSLOG_ prefix. These are reserved for the syslog's use and it is advisable to not create a template using this prefix to avoid conflicts. The following list shows these predefined templates along with their definitions. RSYSLOG_DebugFormat A special format used for troubleshooting property problems. RSYSLOG_SyslogProtocol23Format The format specified in IETF's internet-draft ietf-syslog-protocol-23, which is assumed to become the new syslog standard RFC. RSYSLOG_FileFormat A modern-style logfile format similar to TraditionalFileFormat, but with high-precision time stamps and time zone information. RSYSLOG_TraditionalFileFormat The older default log file format with low-precision time stamps. RSYSLOG_ForwardFormat A forwarding format with high-precision time stamps and time zone information. RSYSLOG_TraditionalForwardFormat The traditional forwarding format with low-precision time stamps. 23.2.4. Global Directives Global directives are configuration options that apply to the rsyslogd daemon. 
They usually specify a value for a specific predefined variable that affects the behavior of the rsyslogd daemon or a rule that follows. All of the global directives are enclosed in a global configuration block. The following is an example of a global directive that specifies overriding local host name for log messages: You can define multiple directives in your /etc/rsyslog.conf configuration file. A directive affects the behavior of all configuration options until another occurrence of that same directive is detected. Global directives can be used to configure actions, queues and for debugging. A comprehensive list of all available configuration directives can be found in the section called "Online Documentation" . Currently, a new configuration format has been developed that replaces the USD-based syntax (see Section 23.3, "Using the New Configuration Format" ). However, classic global directives remain supported as a legacy format. 23.2.5. Log Rotation The following is a sample /etc/logrotate.conf configuration file: All of the lines in the sample configuration file define global options that apply to every log file. In our example, log files are rotated weekly, rotated log files are kept for four weeks, and all rotated log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are comments and are not processed. You may define configuration options for a specific log file and place it under the global options. However, it is advisable to create a separate configuration file for any specific log file in the /etc/logrotate.d/ directory and define any configuration options there. The following is an example of a configuration file placed in the /etc/logrotate.d/ directory: The configuration options in this file are specific for the /var/log/messages log file only. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages log file will be kept for five weeks instead of four weeks as was defined in the global options. The following is a list of some of the directives you can specify in your logrotate configuration file: weekly - Specifies the rotation of log files to be done weekly. Similar directives include: daily monthly yearly compress - Enables compression of rotated log files. Similar directives include: nocompress compresscmd - Specifies the command to be used for compressing. uncompresscmd compressext - Specifies what extension is to be used for compressing. compressoptions - Specifies any options to be passed to the compression program used. delaycompress - Postpones the compression of log files to the rotation of log files. rotate INTEGER - Specifies the number of rotations a log file undergoes before it is removed or mailed to a specific address. If the value 0 is specified, old log files are removed instead of rotated. mail ADDRESS - This option enables mailing of log files that have been rotated as many times as is defined by the rotate directive to the specified address. Similar directives include: nomail mailfirst - Specifies that the just-rotated log files are to be mailed, instead of the about-to-expire log files. maillast - Specifies that the about-to-expire log files are to be mailed, instead of the just-rotated log files. This is the default option when mail is enabled. For the full list of directives and various configuration options, see the logrotate(5) manual page. 23.2.6. 
Increasing the Limit of Open Files Under certain circumstances, rsyslog exceeds the limit for the maximum number of open files. Consequently, rsyslog cannot open new files. To increase the limit of open files in rsyslog : Create the /etc/systemd/system/rsyslog.service.d/increase_nofile_limit.conf file with the following content: 23.3. Using the New Configuration Format In rsyslog version 7, installed by default for Red Hat Enterprise Linux 7 in the rsyslog package, a new configuration syntax is introduced. This new configuration format aims to be more powerful, more intuitive, and to prevent common mistakes by not permitting certain invalid constructs. The syntax enhancement is enabled by the new configuration processor that relies on RainerScript. The legacy format is still fully supported and it is used by default in the /etc/rsyslog.conf configuration file. RainerScript is a scripting language designed for processing network events and configuring event processors such as rsyslog . RainerScript was first used to define expression-based filters, see Example 23.3, "Expression-based Filters" . The version of RainerScript in rsyslog version 7 implements the input() and ruleset() statements, which permit the /etc/rsyslog.conf configuration file to be written in the new syntax. The new syntax differs mainly in that it is much more structured; parameters are passed as arguments to statements, such as input, action, template, and module load. The scope of options is limited by blocks. This enhances readability and reduces the number of bugs caused by misconfiguration. There is also a significant performance gain. Some functionality is exposed in both syntaxes, some only in the new one. Compare the configuration written with legacy-style parameters: and the same configuration with the use of the new format statement: This significantly reduces the number of parameters used in configuration, improves readability, and also provides higher execution speed. For more information on RainerScript statements and parameters, see the section called "Online Documentation" . 23.3.1. Rulesets Leaving special directives aside, rsyslog handles messages as defined by rules that consist of a filter condition and an action to be performed if the condition is true. With a traditionally written /etc/rsyslog.conf file, all rules are evaluated in order of appearance for every input message. This process starts with the first rule and continues until all rules have been processed or until the message is discarded by one of the rules. However, rules can be grouped into sequences called rulesets . With rulesets, you can limit the effect of certain rules only to selected inputs or enhance the performance of rsyslog by defining a distinct set of actions bound to a specific input. In other words, filter conditions that will be inevitably evaluated as false for certain types of messages can be skipped. The legacy ruleset definition in /etc/rsyslog.conf can look as follows: The rule ends when another rule is defined, or the default ruleset is called as follows: With the new configuration format in rsyslog 7, the input() and ruleset() statements are reserved for this operation. The new format ruleset definition in /etc/rsyslog.conf can look as follows: Replace rulesetname with an identifier for your ruleset. The ruleset name cannot start with RSYSLOG_ since this namespace is reserved for use by rsyslog .
RSYSLOG_DefaultRuleset then defines the default set of rules to be performed if the message has no other ruleset assigned. With rule and rule2 you can define rules in filter-action format mentioned above. With the call parameter, you can nest rulesets by calling them from inside other ruleset blocks. After creating a ruleset, you need to specify what input it will apply to: Here you can identify an input message by input_type , which is an input module that gathered the message, or by port_num - the port number. Other parameters such as file or tag can be specified for input() . Replace rulesetname with a name of the ruleset to be evaluated against the message. In case an input message is not explicitly bound to a ruleset, the default ruleset is triggered. You can also use the legacy format to define rulesets, for more information see the section called "Online Documentation" . Example 23.11. Using rulesets The following rulesets ensure different handling of remote messages coming from different ports. Add the following into /etc/rsyslog.conf : Rulesets shown in the above example define log destinations for the remote input from two ports, in case of port 601 , messages are sorted according to the facility. Then, the TCP input is enabled and bound to rulesets. Note that you must load the required modules (imtcp) for this configuration to work. 23.3.2. Compatibility with sysklogd The compatibility mode specified via the -c option exists in rsyslog version 5 but not in version 7. Also, the sysklogd-style command-line options are deprecated and configuring rsyslog through these command-line options should be avoided. However, you can use several templates and directives to configure rsyslogd to emulate sysklogd-like behavior. For more information on various rsyslogd options, see the rsyslogd(8) manual page. 23.4. Working with Queues in Rsyslog Queues are used to pass content, mostly syslog messages, between components of rsyslog . With queues, rsyslog is capable of processing multiple messages simultaneously and to apply several actions to a single message at once. The data flow inside rsyslog can be illustrated as follows: Figure 23.1. Message Flow in Rsyslog Whenever rsyslog receives a message, it passes this message to the preprocessor and then places it into the main message queue . Messages wait there to be dequeued and passed to the rule processor . The rule processor is a parsing and filtering engine. Here, the rules defined in /etc/rsyslog.conf are applied. Based on these rules, the rule processor evaluates which actions are to be performed. Each action has its own action queue. Messages are passed through this queue to the respective action processor which creates the final output. Note that at this point, several actions can run simultaneously on one message. For this purpose, a message is duplicated and passed to multiple action processors. Only one queue per action is possible. Depending on configuration, the messages can be sent right to the action processor without action queuing. This is the behavior of direct queues (see below). In case the output action fails, the action processor notifies the action queue, which then takes an unprocessed element back and after some time interval, the action is attempted again. To sum up, there are two positions where queues stand in rsyslog : either in front of the rule processor as a single main message queue or in front of various types of output actions as action queues . 
Queues provide two main advantages that both lead to increased performance of message processing: they serve as buffers that decouple producers and consumers in the structure of rsyslog they allow for parallelization of actions performed on messages Apart from this, queues can be configured with several directives to provide optimal performance for your system. These configuration options are covered in the following sections. Warning If an output plug-in is unable to deliver a message, it is stored in the preceding message queue. If the queue fills, the inputs block until it is no longer full. This will prevent new messages from being logged via the blocked queue. In the absence of separate action queues this can have severe consequences, such as preventing SSH logging, which in turn can prevent SSH access. Therefore it is advised to use dedicated action queues for outputs which are forwarded over a network or to a database. 23.4.1. Defining Queues Based on where the messages are stored, there are several types of queues: direct , in-memory , disk , and disk-assisted in-memory queues that are most widely used. You can choose one of these types for the main message queue and also for action queues. Add the following into /etc/rsyslog.conf : By adding this you can apply the setting for: main message queue: replace object with main_queue an action queue: replace object with action ruleset: replace object with ruleset Replace queue_type with one of direct , linkedlist or fixedarray (which are in-memory queues), or disk . The default setting for a main message queue is the FixedArray queue with a limit of 10,000 messages. Action queues are by default set as Direct queues. Direct Queues For many simple operations, such as when writing output to a local file, building a queue in front of an action is not needed. To avoid queuing, use: Replace object with main_queue , action or ruleset to use this option to the main message queue, an action queue or for the ruleset respectively. With direct queue, messages are passed directly and immediately from the producer to the consumer. Disk Queues Disk queues store messages strictly on a hard drive, which makes them highly reliable but also the slowest of all possible queuing modes. This mode can be used to prevent the loss of highly important log data. However, disk queues are not recommended in most use cases. To set a disk queue, type the following into /etc/rsyslog.conf : Replace object with main_queue , action or ruleset to use this option to the main message queue, an action queue or for the ruleset respectively. The default size of a queue can be modified with the following configuration directive: where size represents the specified size of disk queue part. The defined size limit is not restrictive, rsyslog always writes one complete queue entry, even if it violates the size limit. Each part of a disk queue matches with an individual file. The naming directive for these files looks as follows: This sets a name prefix for the file followed by a 7-digit number starting at one and incremented for each file. Disk queues are written in parts, with a default size 1 MB. Specify size to use a different value. In-memory Queues With in-memory queue, the enqueued messages are held in memory which makes the process very fast. The queued data is lost if the computer is power cycled or shut down. However, you can use the action (queue.saveonshutdown="on") setting to save the data before shutdown. 
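As a rough sketch of the queue-definition syntax discussed in this section, the following lines declare a LinkedList main message queue and a disk-assisted LinkedList queue for a file output action; the log file path and the queue file name prefix are placeholders chosen for this illustration, not values taken from the examples in this chapter:

main_queue(queue.type="LinkedList")

*.* action(type="omfile" file="/var/log/example.log"
           queue.type="LinkedList"
           queue.filename="example_queue"
           queue.saveOnShutdown="on")

Because queue.filename is set, the in-memory queue gains disk assistance, and queue.saveOnShutdown="on" writes any queued messages to disk when rsyslog stops.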
There are two types of in-memory queues: FixedArray queue - the default mode for the main message queue, with a limit of 10,000 elements. This type of queue uses a fixed, pre-allocated array that holds pointers to queue elements. Due to these pointers, even if the queue is empty a certain amount of memory is consumed. However, FixedArray offers the best run time performance and is optimal when you expect a relatively low number of queued messages and high performance. LinkedList queue - here, all structures are dynamically allocated in a linked list, thus the memory is allocated only when needed. LinkedList queues handle occasional message bursts very well. In general, use LinkedList queues when in doubt. Compared to FixedArray, it consumes less memory and lowers the processing overhead. Use the following syntax to configure in-memory queues: Replace object with main_queue , action or ruleset to use this option to the main message queue, an action queue or for the ruleset respectively. Disk-Assisted In-memory Queues Both disk and in-memory queues have their advantages and rsyslog lets you combine them in disk-assisted in-memory queues . To do so, configure a normal in-memory queue and then add the queue.filename="file_name" directive to its block to define a file name for disk assistance. This queue then becomes disk-assisted , which means it couples an in-memory queue with a disk queue to work in tandem. The disk queue is activated if the in-memory queue is full or needs to persist after shutdown. With a disk-assisted queue, you can set both disk-specific and in-memory specific configuration parameters. This type of queue is probably the most commonly used, it is especially useful for potentially long-running and unreliable actions. To specify the functioning of a disk-assisted in-memory queue, use the so-called watermarks: Replace object with main_queue , action or ruleset to use this option to the main message queue, an action queue or for the ruleset respectively. Replace number with a number of enqueued messages. When an in-memory queue reaches the number defined by the high watermark, it starts writing messages to disk and continues until the in-memory queue size drops to the number defined with the low watermark. Correctly set watermarks minimize unnecessary disk writes, but also leave memory space for message bursts since writing to disk files is rather lengthy. Therefore, the high watermark must be lower than the whole queue capacity set with queue.size . The difference between the high watermark and the overall queue size is a spare memory buffer reserved for message bursts. On the other hand, setting the high watermark too low will turn on disk assistance unnecessarily often. Example 23.12. Reliable Forwarding of Log Messages to a Server Rsyslog is often used to maintain a centralized logging system, where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, it is advisable to configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues are not configurable for connections using the UDP protocol. Forwarding To a Single Server Suppose the task is to forward log messages from the system to a server with host name example.com , and to configure an action queue to buffer the messages in case of a server outage. 
To do so, perform the following steps: Use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory: Where: queue.type enables a LinkedList in-memory queue, queue.filename defines a disk storage, in this case the backup files are created in the /var/lib/rsyslog/ directory with the example_fwd prefix, the action.resumeRetryCount= "-1" setting prevents rsyslog from dropping messages when retrying to connect if server is not responding, enabled queue.saveonshutdown saves in-memory data if rsyslog shuts down, the last line forwards all received messages to the logging server using reliable TCP delivery, port specification is optional. With the above configuration, rsyslog keeps messages in memory if the remote server is not reachable. A file on disk is created only if rsyslog runs out of the configured memory queue space or needs to shut down, which benefits the system performance. Forwarding To Multiple Servers The process of forwarding log messages to multiple servers is similar to the procedure: Each destination server requires a separate forwarding rule, action queue specification, and backup file on disk. For example, use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory: 23.4.2. Creating a New Directory for rsyslog Log Files Rsyslog runs as the syslogd daemon and is managed by SELinux. Therefore all files to which rsyslog is required to write to, must have the appropriate SELinux file context. Creating a New Working Directory If required to use a different directory to store working files, create a directory as follows: Install utilities to manage SELinux policy: Set the SELinux directory context type to be the same as the /var/lib/rsyslog/ directory: Apply the SELinux context: If required, check the SELinux context as follows: Create subdirectories as required. For example: The subdirectories will be created with the same SELinux context as the parent directory. Add the following line in /etc/rsyslog.conf immediately before it is required to take effect: This setting will remain in effect until the WorkDirectory directive is encountered while parsing the configuration files. 23.4.3. Managing Queues All types of queues can be further configured to match your requirements. You can use several directives to modify both action queues and the main message queue. Currently, there are more than 20 queue parameters available, see the section called "Online Documentation" . Some of these settings are used commonly, others, such as worker thread management, provide closer control over the queue behavior and are reserved for advanced users. With advanced settings, you can optimize rsyslog 's performance, schedule queuing, or modify the behavior of a queue on system shutdown. Limiting Queue Size You can limit the number of messages that queue can contain with the following setting: Replace object with main_queue , action or ruleset to use this option to the main message queue, an action queue or for the ruleset respectively. Replace number with a number of enqueued messages. You can set the queue size only as the number of messages, not as their actual memory size. The default queue size is 10,000 messages for the main message queue and ruleset queues, and 1000 for action queues. 
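For illustration, the queue size limits described above could be set as in the following sketch; the values and the forwarding target are arbitrary placeholders, not recommendations:

main_queue(queue.size="50000")

*.* action(type="omfwd" target="example.com" port="514" protocol="tcp"
           queue.type="LinkedList"
           queue.size="5000")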
Disk assisted queues are unlimited by default and cannot be restricted with this directive, but you can reserve them physical disk space in bytes with the following settings: Replace object with main_queue , action or ruleset . When the size limit specified by number is hit, messages are discarded until sufficient amount of space is freed by dequeued messages. Discarding Messages When a queue reaches a certain number of messages, you can discard less important messages in order to save space in the queue for entries of higher priority. The threshold that launches the discarding process can be set with the so-called discard mark : Replace object with MainMsg or with Action to use this option to the main message queue or for an action queue respectively. Here, number stands for a number of messages that have to be in the queue to start the discarding process. To define which messages to discard, use: Replace number with one of the following numbers for respective priorities: 7 (debug), 6 (info), 5 (notice), 4 (warning), 3 (err), 2 (crit), 1 (alert), or 0 (emerg). With this setting, both newly incoming and already queued messages with lower than defined priority are erased from the queue immediately after the discard mark is reached. Using Timeframes You can configure rsyslog to process queues during a specific time period. With this option you can, for example, transfer some processing into off-peak hours. To define a time frame, use the following syntax: With hour you can specify hours that bound your time frame. Use the 24-hour format without minutes. Configuring Worker Threads A worker thread performs a specified action on the enqueued message. For example, in the main message queue, a worker task is to apply filter logic to each incoming message and enqueue them to the relevant action queues. When a message arrives, a worker thread is started automatically. When the number of messages reaches a certain number, another worker thread is turned on. To specify this number, use: Replace number with a number of messages that will trigger a supplemental worker thread. For example, with number set to 100, a new worker thread is started when more than 100 messages arrive. When more than 200 messages arrive, the third worker thread starts and so on. However, too many working threads running in parallel becomes ineffective, so you can limit the maximum number of them by using: where number stands for a maximum number of working threads that can run in parallel. For the main message queue, the default limit is 1 thread. Once a working thread has been started, it keeps running until an inactivity timeout appears. To set the length of timeout, type: Replace time with the duration set in milliseconds. Specifies time without new messages after which the worker thread will be closed. Default setting is one minute. Batch Dequeuing To increase performance, you can configure rsyslog to dequeue multiple messages at once. To set the upper limit for such dequeueing, use: Replace number with the maximum number of messages that can be dequeued at once. Note that a higher setting combined with a higher number of permitted working threads results in greater memory consumption. Terminating Queues When terminating a queue that still contains messages, you can try to minimize the data loss by specifying a time interval for worker threads to finish the queue processing: Specify time in milliseconds. If after that period there are still some enqueued messages, workers finish the current data element and then terminate. 
Unprocessed messages are therefore lost. Another time interval can be set for workers to finish the final element: In case this timeout expires, any remaining workers are shut down. To save data at shutdown, use: If set, all queue elements are saved to disk before rsyslog terminates. 23.4.4. Using the New Syntax for rsyslog queues In the new syntax available in rsyslog 7, queues are defined inside the action() object that can be used either separately or inside a ruleset in /etc/rsyslog.conf . The format of an action queue is as follows: Replace action_type with the name of the module that is to perform the action and replace queue_size with a maximum number of messages the queue can contain. For queue_type , choose disk or select from one of the in-memory queues: direct , linkedlist or fixedarray . For file_name , specify only a file name, not a path. Note that if creating a new directory to hold log files, the SELinux context must be set. See Section 23.4.2, "Creating a New Directory for rsyslog Log Files" for an example. Example 23.13. Defining an Action Queue To configure the output action with an asynchronous linked-list based action queue which can hold a maximum of 10,000 messages, enter a command as follows: The rsyslog 7 syntax for a direct action queue is as follows: The rsyslog 7 syntax for an action queue with multiple parameters can be written as follows: The default work directory, or the last work directory to be set, will be used. If required to use a different work directory, add a line as follows before the action queue: Example 23.14. Forwarding To a Single Server Using the New Syntax The following example is based on the procedure Forwarding To a Single Server in order to show the difference between the traditional syntax and the rsyslog 7 syntax. The omfwd plug-in is used to provide forwarding over UDP or TCP . The default is UDP . As the plug-in is built in, it does not have to be loaded. Use the following configuration in /etc/rsyslog.conf or create a file with the following content in the /etc/rsyslog.d/ directory: Where:
queue.type="linkedlist" enables a LinkedList in-memory queue,
queue.filename defines a disk storage. The backup files are created with the example_fwd prefix, in the working directory specified by the preceding global workDirectory directive,
the action.resumeRetryCount="-1" setting prevents rsyslog from dropping messages when retrying to connect if the server is not responding,
enabled queue.saveOnShutdown="on" saves in-memory data if rsyslog shuts down,
the last line forwards all received messages to the logging server, port specification is optional.
23.5. Configuring rsyslog on a Logging Server The rsyslog service provides facilities both for running a logging server and for configuring individual systems to send their log files to the logging server. See Example 23.12, "Reliable Forwarding of Log Messages to a Server" for information on client rsyslog configuration. The rsyslog service must be installed on the system that you intend to use as a logging server and all systems that will be configured to send logs to it. Rsyslog is installed by default in Red Hat Enterprise Linux 7. If required, to ensure that it is, enter the following command as root : The default protocol and port for syslog traffic is UDP and 514 , as listed in the /etc/services file. However, rsyslog defaults to using TCP on port 514 . In the configuration file, /etc/rsyslog.conf , TCP is indicated by @@ .
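For example, with the legacy selector syntax a client can forward all messages to a remote logging server, here the placeholder host name logserver.example.com, using a single @ for UDP or @@ for TCP:

*.* @logserver.example.com:514
*.* @@logserver.example.com:514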
Other ports are sometimes used in examples, however SELinux is only configured to allow sending and receiving on the following ports by default: The semanage utility is provided as part of the policycoreutils-python package. If required, install the package as follows: In addition, by default the SELinux type for rsyslog , rsyslogd_t , is configured to permit sending and receiving to the remote shell ( rsh ) port with SELinux type rsh_port_t , which defaults to TCP on port 514 . Therefore it is not necessary to use semanage to explicitly permit TCP on port 514 . For example, to check what SELinux is set to permit on port 514 , enter a command as follows: For more information on SELinux, see Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide . Perform the steps in the following procedures on the system that you intend to use as your logging server. All steps in these procedure must be made as the root user. Configure SELinux to Permit rsyslog Traffic on a Port If required to use a new port for rsyslog traffic, follow this procedure on the logging server and the clients. For example, to send and receive TCP traffic on port 10514 , proceed with the following sequence of commands: Run the semanage port command with the following parameters: Review the SELinux ports by entering the following command: If the new port was already configured in /etc/rsyslog.conf , restart rsyslog now for the change to take effect: Verify which ports rsyslog is now listening to: See the semanage-port(8) manual page for more information on the semanage port command. Configuring firewalld Configure firewalld to allow incoming rsyslog traffic. For example, to allow TCP traffic on port 10514 , proceed as follows: Where zone is the zone of the interface to use. Note that these changes will not persist after the system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option. For more information on opening and closing ports in firewalld , see the Red Hat Enterprise Linux 7 Security Guide . To verify the above settings, use a command as follows: Configuring rsyslog to Receive and Sort Remote Log Messages Open the /etc/rsyslog.conf file in a text editor and proceed as follows: Add these lines below the modules section but above the Provides UDP syslog reception section: Replace the default Provides TCP syslog reception section with the following: Save the changes to the /etc/rsyslog.conf file. The rsyslog service must be running on both the logging server and the systems attempting to log to it. Use the systemctl command to start the rsyslog service. To ensure the rsyslog service starts automatically in future, enter the following command as root: Your log server is now configured to receive and store log files from the other systems in your environment. 23.5.1. Using The New Template Syntax on a Logging Server Rsyslog 7 has a number of different templates styles. The string template most closely resembles the legacy format. Reproducing the templates from the example above using the string format would look as follows: These templates can also be written in the list format as follows: This template text format might be easier to read for those new to rsyslog and therefore can be easier to adapt as requirements change. To complete the change to the new syntax, we need to reproduce the module load command, add a rule set, and then bind the rule set to the protocol, port, and ruleset: 23.6. 
Using Rsyslog Modules Due to its modular design, rsyslog offers a variety of modules which provide additional functionality. Note that modules can be written by third parties. Most modules provide additional inputs (see Input Modules below) or outputs (see Output Modules below). Other modules provide special functionality specific to each module. The modules may provide additional configuration directives that become available after a module is loaded. To load a module, use the following syntax: where MODULE represents your desired module. For example, if you want to load the Text File Input Module ( imfile ) that enables rsyslog to convert any standard text files into syslog messages, specify the following line in the /etc/rsyslog.conf configuration file: rsyslog offers a number of modules which are split into the following main categories: Input Modules - Input modules gather messages from various sources. The name of an input module always starts with the im prefix, such as imfile and imjournal . Output Modules - Output modules provide a facility to issue message to various targets such as sending across a network, storing in a database, or encrypting. The name of an output module always starts with the om prefix, such as omsnmp , omrelp , and so on. Parser Modules - These modules are useful in creating custom parsing rules or to parse malformed messages. With moderate knowledge of the C programming language, you can create your own message parser. The name of a parser module always starts with the pm prefix, such as pmrfc5424 , pmrfc3164 , and so on. Message Modification Modules - Message modification modules change content of syslog messages. Names of these modules start with the mm prefix. Message Modification Modules such as mmanon , mmnormalize , or mmjsonparse are used for anonymization or normalization of messages. String Generator Modules - String generator modules generate strings based on the message content and strongly cooperate with the template feature provided by rsyslog . For more information on templates, see Section 23.2.3, "Templates" . The name of a string generator module always starts with the sm prefix, such as smfile or smtradfile . Library Modules - Library modules provide functionality for other loadable modules. These modules are loaded automatically by rsyslog when needed and cannot be configured by the user. A comprehensive list of all available modules and their detailed description can be found at http://www.rsyslog.com/doc/rsyslog_conf_modules.html . Warning Note that when rsyslog loads any modules, it provides them with access to some of its functions and data. This poses a possible security threat. To minimize security risks, use trustworthy modules only. 23.6.1. Importing Text Files The Text File Input Module, abbreviated as imfile , enables rsyslog to convert any text file into a stream of syslog messages. You can use imfile to import log messages from applications that create their own text file logs. To load imfile , add the following into /etc/rsyslog.conf : It is sufficient to load imfile once, even when importing multiple files. The PollingInterval module argument specifies how often rsyslog checks for changes in connected text files. The default interval is 10 seconds, to change it, replace int with a time interval specified in seconds. To identify the text files to import, use the following syntax in /etc/rsyslog.conf : Settings required to specify an input text file: replace path_to_file with a path to the text file. 
replace tag: with a tag name for this message. Apart from the required directives, there are several other settings that can be applied to the text input. Set the severity of imported messages by replacing severity with an appropriate keyword. Replace facility with a keyword to define the subsystem that produced the message. The keywords for severity and facility are the same as those used in facility/priority-based filters, see Section 23.2.1, "Filters" . Example 23.15. Importing Text Files The Apache HTTP server creates log files in text format. To apply the processing capabilities of rsyslog to Apache error messages, first use the imfile module to import the messages. Add the following into /etc/rsyslog.conf : 23.6.2. Exporting Messages to a Database Processing of log data can be faster and more convenient when performed in a database rather than with text files. Based on the type of DBMS used, choose from various output modules such as ommysql , ompgsql , omoracle , or ommongodb . As an alternative, use the generic omlibdbi output module that relies on the libdbi library. The omlibdbi module supports the database systems Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, mSQL, MySQL, and PostgreSQL. Example 23.16. Exporting Rsyslog Messages to a Database To store the rsyslog messages in a MySQL database, add the following into /etc/rsyslog.conf : First, the output module is loaded, then the communication port is specified. Additional information, such as the name of the server and the database, and authentication data, is specified on the last line of the above example. 23.6.3. Enabling Encrypted Transport Confidentiality and integrity in network transmissions can be provided by either the TLS or GSSAPI encryption protocol. Transport Layer Security (TLS) is a cryptographic protocol designed to provide communication security over the network. When using TLS, rsyslog messages are encrypted before sending, and mutual authentication exists between the sender and receiver. For configuring TLS, see the section called "Configuring Encrypted Message Transfer with TLS" . Generic Security Service API (GSSAPI) is an application programming interface for programs to access security services. To use it in connection with rsyslog , you must have a functioning Kerberos environment. For configuring GSSAPI, see the section called "Configuring Encrypted Message Transfer with GSSAPI" . Configuring Encrypted Message Transfer with TLS To use encrypted transport through TLS, you need to configure both the server and the client. Create a public key, a private key, and a certificate file; see Section 14.1.11, "Generating a New Key and Certificate" . On the server side, configure the following in the /etc/rsyslog.conf configuration file: Set the gtls netstream driver as the default driver: Provide paths to certificate files: You can merge all global directives into a single block if you prefer a less cluttered configuration file. Replace:
path_ca.pem with a path to your public key
path_cert.pem with a path to the certificate file
path_key.pem with a path to the private key
Load the imtcp module and set driver options: Start a server: Replace:
number to specify the driver mode. To enable TCP-only mode, use 1
port with the port number at which to start a listener, for example 10514
The anon setting means that the client is not authenticated. On the client side, configure the following in the /etc/rsyslog.conf configuration file: Load the public key: Replace path_ca.pem with a path to the public key.
Set the gtls netstream driver as the default driver: Configure the driver and specify what action will be performed: Replace number , anon , and port with the same values as on the server. On the last line in the above listing, an example action forwards messages from the server to the specified TCP port. Configuring Encrypted Message Transfer with GSSAPI In rsyslog , interaction with GSSAPI is provided by the imgssapi module. To turn on the GSSAPI transfer mode: Put the following configuration in /etc/rsyslog.conf : This directive loads the imgssapi module. Specify the input as follows: Replace name with the name of the GSS server. Replace number with the maximum number of sessions supported. This number is not limited by default. Replace port with a selected port on which you want to start a GSS server. The $InputGSSServerPermitPlainTCP on setting permits the server to also receive plain TCP messages on the same port. This is off by default. Note The imgssapi module is initialized as soon as the configuration file reader encounters the $InputGSSServerRun directive in the /etc/rsyslog.conf configuration file. The supplementary options configured after $InputGSSServerRun are therefore ignored. For configuration to take effect, all imgssapi configuration options must be placed before $InputGSSServerRun. Example 23.17. Using GSSAPI The following configuration enables a GSS server on port 1514 that also permits receiving plain TCP syslog messages on the same port. 23.6.4. Using RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data logging in computer networks. It is designed to provide reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable. Configuring RELP To configure RELP, you need to configure both the server and the client using the /etc/rsyslog.conf file. To configure the client: Load the required modules: Configure the TCP input as follows: Replace port with the port at which to start a listener. Configure the transport settings: Replace target_IP and target_port with the IP address and port that identify the target server. To configure the server: Configure loading the module: Configure the TCP input similarly to the client configuration: Replace target_port with the same value as on the clients. Configure the rules and choose an action to be performed. In the following example, log_path specifies the path for storing messages: Configuring RELP with TLS To configure RELP with TLS, you need to configure authentication. Then, you need to configure both the server and the client using the /etc/rsyslog.conf file. Create a public key, a private key, and a certificate file. For instructions, see Section 14.1.11, "Generating a New Key and Certificate" . To configure the client: Load the required modules: Configure the TCP input as follows: Replace port with the port at which to start a listener. Configure the transport settings: Replace:
target_IP and target_port with the IP address and port that identify the target server.
path_ca.pem , path_cert.pem , and path_key.pem with paths to the certificate files.
mode with the authentication mode for the transaction. Use either "name" or "fingerprint" .
peer_name with a certificate fingerprint of the permitted peer. If you specify this, tls.permittedpeer restricts connection to the selected group of peers.
The tls="on" setting enables the TLS protocol.
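Putting the client-side steps above together, the transport action might look like the following sketch; the module load and input lines from the earlier steps are assumed to already be in place, and all values are the placeholders named above:

module(load="omrelp")

*.* action(type="omrelp"
           target="target_IP" port="target_port"
           tls="on"
           tls.cacert="path_ca.pem"
           tls.mycert="path_cert.pem"
           tls.myprivkey="path_key.pem"
           tls.authmode="mode"
           tls.permittedpeer=["peer_name"])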
To configure the server: Configure loading the module: Configure the TCP input similarly to the client configuration: Replace the highlighted values with the same values as on the client. Configure the rules and choose an action to be performed. In the following example, log_path specifies the path for storing messages: 23.7. Interaction of Rsyslog and Journal As mentioned above, Rsyslog and Journal , the two logging applications present on your system, have several distinctive features that make them suitable for specific use cases. In many situations it is useful to combine their capabilities, for example, to create structured messages and store them in a file database (see Section 23.8, "Structured Logging with Rsyslog" ). A communication interface needed for this cooperation is provided by input and output modules on the side of Rsyslog and by the Journal 's communication socket. By default, rsyslogd uses the imjournal module as a default input mode for journal files. With this module, you import not only the messages but also the structured data provided by journald . Also, older data can be imported from journald (unless forbidden with the IgnorePreviousMessages option). See Section 23.8.1, "Importing Data from Journal" for basic configuration of imjournal . As an alternative, configure rsyslogd to read from the socket provided by journal as an output for syslog-based applications. The path to the socket is /run/systemd/journal/syslog . Use this option when you want to maintain plain rsyslog messages. Compared to imjournal , the socket input currently offers more features, such as ruleset binding or filtering. To import Journal data through the socket, use the following configuration in /etc/rsyslog.conf : You can also output messages from Rsyslog to Journal with the omjournal module. Configure the output in /etc/rsyslog.conf as follows: For instance, the following configuration forwards all received messages on TCP port 10514 to the Journal: 23.8. Structured Logging with Rsyslog On systems that produce large amounts of log data, it can be convenient to maintain log messages in a structured format . With structured messages, it is easier to search for particular information, to produce statistics and to cope with changes and inconsistencies in message structure. Rsyslog uses the JSON (JavaScript Object Notation) format to provide structure for log messages. Compare the following unstructured log message: with a structured one: Searching structured data with the use of key-value pairs is faster and more precise than searching text files with regular expressions. The structure also lets you search for the same entry in messages produced by various applications. Also, JSON files can be stored in a document database such as MongoDB, which provides additional performance and analysis capabilities. On the other hand, a structured message requires more disk space than the unstructured one. In rsyslog , log messages with meta data are pulled from Journal with use of the imjournal module. With the mmjsonparse module, you can parse data imported from Journal and from other sources and process them further, for example as a database output. For parsing to be successful, mmjsonparse requires input messages to be structured in a way that is defined by the Lumberjack project. The Lumberjack project aims to add structured logging to rsyslog in a backward-compatible way. To identify a structured message, Lumberjack specifies the @cee: string that prepends the actual JSON structure.
Also, Lumberjack defines the list of standard field names that should be used for entities in the JSON string. For more information on Lumberjack , see the section called "Online Documentation" . The following is an example of a lumberjack-formatted message: To build this structure inside Rsyslog , a template is used, see Section 23.8.2, "Filtering Structured Messages" . Applications and servers can employ the libumberlog library to generate messages in the lumberjack-compliant form. For more information on libumberlog , see the section called "Online Documentation" . 23.8.1. Importing Data from Journal The imjournal module is Rsyslog 's input module to natively read the journal files (see Section 23.7, "Interaction of Rsyslog and Journal" ). Journal messages are then logged in text format as other rsyslog messages. However, with further processing, it is possible to translate meta data provided by Journal into a structured message. To import data from Journal to Rsyslog , use the following configuration in /etc/rsyslog.conf : With number_of_messages , you can specify how often the journal data must be saved. This will happen each time the specified number of messages is reached. Replace path with a path to the state file. This file tracks the journal entry that was the last one processed. With seconds , you set the length of the rate limit interval. The number of messages processed during this interval cannot exceed the value specified in burst_number . The default setting is 20,000 messages per 600 seconds. Rsyslog discards messages that come after the maximum burst within the time frame specified. With IgnorePreviousMessages you can ignore messages that are currently in Journal and import only new messages, which is used when there is no state file specified. The default setting is off . Please note that if this setting is off and there is no state file, all messages in the Journal are processed, even if they were already processed in a previous rsyslog session. Note You can use imjournal simultaneously with the imuxsock module, which is the traditional system log input. However, to avoid message duplication, you must prevent imuxsock from reading the Journal's system socket. To do so, use the SysSock.Use directive: You can translate all data and meta data stored by Journal into structured messages. Some of these meta data entries are listed in Example 23.19, "Verbose journalctl Output" ; for a complete list of journal fields, see the systemd.journal-fields(7) manual page. For example, it is possible to focus on kernel journal fields , which are used by messages originating in the kernel. 23.8.2. Filtering Structured Messages To create a lumberjack-formatted message that is required by rsyslog 's parsing module, use the following template: This template prepends the @cee: string to the JSON string and can be applied, for example, when creating an output file with the omfile module. To access JSON field names, use the $! prefix. For example, the following filter condition searches for messages with specific hostname and UID : 23.8.3. Parsing JSON The mmjsonparse module is used for parsing structured messages. These messages can come from Journal or from other input sources, and must be formatted in a way defined by the Lumberjack project. These messages are identified by the presence of the @cee: string. Then, mmjsonparse checks if the JSON structure is valid and then the message is parsed.
To parse lumberjack-formatted JSON messages with mmjsonparse , use the following configuration in the /etc/rsyslog.conf : In this example, the mmjsonparse module is loaded on the first line, then all messages are forwarded to it. Currently, there are no configuration parameters available for mmjsonparse . 23.8.4. Storing Messages in the MongoDB Rsyslog supports storing JSON logs in the MongoDB document database through the ommongodb output module. To forward log messages into MongoDB, use the following syntax in the /etc/rsyslog.conf (configuration parameters for ommongodb are available only in the new configuration format; see Section 23.3, "Using the New Configuration Format" ): Replace DB_server with the name or address of the MongoDB server. Specify port to select a non-standard port from the MongoDB server. The default port value is 0 and usually there is no need to change this parameter. With DB_name , you identify to which database on the MongoDB server you want to direct the output. Replace collection_name with the name of a collection in this database. In MongoDB, collection is a group of documents, the equivalent of an RDBMS table. You can set your login details by replacing UID and password . You can shape the form of the final database output with use of templates. By default, rsyslog uses a template based on standard lumberjack field names. 23.9. Debugging Rsyslog To run rsyslogd in debugging mode, use the following command: With this command, rsyslogd produces debugging information and prints it to the standard output. The -n stands for "no fork". You can modify debugging with environmental variables, for example, you can store the debug output in a log file. Before starting rsyslogd , type the following on the command line: Replace path with a desired location for the file where the debugging information will be logged. For a complete list of options available for the RSYSLOG_DEBUG variable, see the related section in the rsyslogd(8) manual page. To check if syntax used in the /etc/rsyslog.conf file is valid use: Where 1 represents level of verbosity of the output message. This is a forward compatibility option because currently, only one level is provided. However, you must add this argument to run the validation. 23.10. Using the Journal The Journal is a component of systemd that is responsible for viewing and management of log files. It can be used in parallel, or in place of a traditional syslog daemon, such as rsyslogd . The Journal was developed to address problems connected with traditional logging. It is closely integrated with the rest of the system, supports various logging technologies and access management for the log files. Logging data is collected, stored, and processed by the Journal's journald service. It creates and maintains binary files called journals based on logging information that is received from the kernel, from user processes, from standard output, and standard error output of system services or via its native API. These journals are structured and indexed, which provides relatively fast seek times. Journal entries can carry a unique identifier. The journald service collects numerous meta data fields for each log message. The actual journal files are secured, and therefore cannot be manually edited. 23.10.1. Viewing Log Files To access the journal logs, use the journalctl tool. 
For a basic view of the logs, type as root : The output of this command is a list of all log files generated on the system including messages generated by system components and by users. The structure of this output is similar to the one used in /var/log/messages/ but with certain improvements:
the priority of entries is marked visually; lines of error priority and higher are highlighted with red color and a bold font is used for lines with notice and warning priority
the time stamps are converted for the local time zone of your system
all logged data is shown, including rotated logs
the beginning of a boot is tagged with a special line
Example 23.18. Example Output of journalctl The following is an example output provided by the journalctl tool. When called without parameters, the listed entries begin with a time stamp, then the host name and application that performed the operation is mentioned followed by the actual message. This example shows the first three entries in the journal log: In many cases, only the latest entries in the journal log are relevant. The simplest way to reduce journalctl output is to use the -n option that lists only the specified number of most recent log entries: Replace Number with the number of lines to be shown. When no number is specified, journalctl displays the ten most recent entries. The journalctl command allows controlling the form of the output with the following syntax: Replace form with a keyword specifying a desired form of output. There are several options, such as verbose , which returns full-structured entry items with all fields, export , which creates a binary stream suitable for backups and network transfer, and json , which formats entries as JSON data structures. For the full list of keywords, see the journalctl(1) manual page. Example 23.19. Verbose journalctl Output To view full meta data about all entries, type: This example lists fields that identify a single log entry. These meta data can be used for message filtering as shown in the section called "Advanced Filtering" . For a complete description of all possible fields see the systemd.journal-fields(7) manual page. 23.10.2. Access Control By default, Journal users without root privileges can only see log files generated by them. The system administrator can add selected users to the adm group, which grants them access to complete log files. To do so, type as root : Here, replace username with a name of the user to be added to the adm group. This user then receives the same output of the journalctl command as the root user. Note that access control only works when persistent storage is enabled for Journal . 23.10.3. Using The Live View When called without parameters, journalctl shows the full list of entries, starting with the oldest entry collected. With the live view, you can supervise the log messages in real time as new entries are continuously printed as they appear. To start journalctl in live view mode, type: This command returns a list of the ten most current log lines. The journalctl utility then stays running and waits for new changes to show them immediately. 23.10.4. Filtering Messages The output of the journalctl command executed without parameters is often extensive; therefore, you can use various filtering methods to extract information to meet your needs. Filtering by Priority Log messages are often used to track erroneous behavior on the system.
To view only entries with a selected or higher priority, use the following syntax: Here, replace priority with one of the following keywords (or with a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). Example 23.20. Filtering by Priority To view only entries with error or higher priority, use: Filtering by Time To view log entries only from the current boot, type: If you reboot your system just occasionally, the -b will not significantly reduce the output of journalctl . In such cases, time-based filtering is more helpful: With --since and --until , you can view only log messages created within a specified time range. You can pass values to these options in form of date or time or both as shown in the following example. Example 23.21. Filtering by Time and Priority Filtering options can be combined to reduce the set of results according to specific requests. For example, to view the warning or higher priority messages from a certain point in time, type: Advanced Filtering Example 23.19, "Verbose journalctl Output" lists a set of fields that specify a log entry and can all be used for filtering. For a complete description of meta data that systemd can store, see the systemd.journal-fields(7) manual page. This meta data is collected for each log message, without user intervention. Values are usually text-based, but can take binary and large values; fields can have multiple values assigned though it is not very common. To view a list of unique values that occur in a specified field, use the following syntax: Replace fieldname with a name of a field you are interested in. To show only log entries that fit a specific condition, use the following syntax: Replace fieldname with a name of a field and value with a specific value contained in that field. As a result, only lines that match this condition are returned. Note As the number of meta data fields stored by systemd is quite large, it is easy to forget the exact name of the field of interest. When unsure, type: and press the Tab key two times. This shows a list of available field names. Tab completion based on context works on field names, so you can type a distinctive set of letters from a field name and then press Tab to complete the name automatically. Similarly, you can list unique values from a field. Type: and press Tab two times. This serves as an alternative to journalctl -F fieldname . You can specify multiple values for one field: Specifying two matches for the same field results in a logical OR combination of the matches. Entries matching value1 or value2 are displayed. Also, you can specify multiple field-value pairs to further reduce the output set: If two matches for different field names are specified, they will be combined with a logical AND . Entries have to match both conditions to be shown. With use of the + symbol, you can set a logical OR combination of matches for multiple fields: This command returns entries that match at least one of the conditions, not only those that match both of them. Example 23.22. Advanced filtering To display entries created by avahi-daemon.service or crond.service under user with UID 70, use the following command: Since there are two values set for the _SYSTEMD_UNIT field, both results will be displayed, but only when matching the _UID=70 condition. This can be expressed simply as: (UID=70 and (avahi or cron)). 
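As a concrete sketch of how these filters combine (the unit name and time stamp below are illustrative and not taken from the examples above):
journalctl -p err -b
journalctl _SYSTEMD_UNIT=sshd.service -p warning --since="2013-3-16 12:00"
The first command restricts the output to messages of error or higher priority from the current boot; the second combines a field match with priority and time-based filtering.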
You can apply the aforementioned filtering also in the live-view mode to keep track of the latest changes in the selected group of log entries: 23.10.5. Enabling Persistent Storage By default, Journal stores log files only in memory or a small ring-buffer in the /run/log/journal/ directory. This is sufficient to show recent log history with journalctl . This directory is volatile, log data is not saved permanently. With the default configuration, syslog reads the journal logs and stores them in the /var/log/ directory. With persistent logging enabled, journal files are stored in /var/log/journal which means they persist after reboot. Journal can then replace rsyslog for some users (but see the chapter introduction). Enabled persistent storage has the following advantages Richer data is recorded for troubleshooting in a longer period of time For immediate troubleshooting, richer data is available after a reboot Server console currently reads data from journal, not log files Persistent storage has also certain disadvantages: Even with persistent storage the amount of data stored depends on free memory, there is no guarantee to cover a specific time span More disk space is needed for logs To enable persistent storage for Journal, create the journal directory manually as shown in the following example. As root type: Then, restart journald to apply the change: 23.11. Managing Log Files in a Graphical Environment As an alternative to the aforementioned command-line utilities, Red Hat Enterprise Linux 7 provides an accessible GUI for managing log messages. 23.11.1. Viewing Log Files Most log files are stored in plain text format. You can view them with any text editor such as Vi or Emacs . Some log files are readable by all users on the system; however, root privileges are required to read most log files. To view system log files in an interactive, real-time application, use the System Log . Note In order to use the System Log , first ensure the gnome-system-log package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 9.2.4, "Installing Packages" . After you have installed the gnome-system-log package, open the System Log by clicking Applications System Tools System Log , or type the following command at a shell prompt: The application only displays log files that exist; thus, the list might differ from the one shown in Figure 23.2, "System Log" . Figure 23.2. System Log The System Log application lets you filter any existing log file. Click on the button marked with the gear symbol to view the menu, select menu:[ Filters > > Manage Filters ] to define or edit the desired filter. Figure 23.3. System Log - Filters Adding or editing a filter lets you define its parameters as is shown in Figure 23.4, "System Log - defining a filter" . Figure 23.4. System Log - defining a filter When defining a filter, the following parameters can be edited: Name - Specifies the name of the filter. Regular Expression - Specifies the regular expression that will be applied to the log file and will attempt to match any possible strings of text in it. Effect Highlight - If checked, the found results will be highlighted with the selected color. You may select whether to highlight the background or the foreground of the text. Hide - If checked, the found results will be hidden from the log file you are viewing. 
When you have at least one filter defined, it can be selected from the Filters menu and it will automatically search for the strings you have defined in the filter and highlight or hide every successful match in the log file you are currently viewing. Figure 23.5. System Log - enabling a filter When you select the Show matches only option, only the matched strings will be shown in the log file you are currently viewing. 23.11.2. Adding a Log File To add a log file you want to view in the list, select File Open . This will display the Open Log window where you can select the directory and file name of the log file you want to view. Figure 23.6, "System Log - adding a log file" illustrates the Open Log window. Figure 23.6. System Log - adding a log file Click on the Open button to open the file. The file is immediately added to the viewing list where you can select it and view its contents. Note The System Log also allows you to open log files zipped in the .gz format. 23.11.3. Monitoring Log Files System Log monitors all opened logs by default. If a new line is added to a monitored log file, the log name appears in bold in the log list. If the log file is selected or displayed, the new lines appear in bold at the bottom of the log file. Figure 23.7, "System Log - new log alert" illustrates a new alert in the cron log file and in the messages log file. Clicking on the messages log file displays the logs in the file with the new lines in bold. Figure 23.7. System Log - new log alert 23.12. Additional Resources For more information on how to configure the rsyslog daemon and how to locate, view, and monitor log files, see the resources listed below. Installed Documentation rsyslogd (8) - The manual page for the rsyslogd daemon documents its usage. rsyslog.conf (5) - The manual page named rsyslog.conf documents available configuration options. logrotate (8) - The manual page for the logrotate utility explains in greater detail how to configure and use it. journalctl (1) - The manual page for the journalctl daemon documents its usage. journald.conf (5) - This manual page documents available configuration options. systemd.journal-fields (7) - This manual page lists special Journal fields. Installable Documentation /usr/share/doc/rsyslog version /html/index.html - This file, which is provided by the rsyslog-doc package from the Optional channel, contains information on rsyslog . See Section 9.5.7, "Adding the Optional and Supplementary Repositories" for more information on Red Hat additional channels. Before accessing the documentation, you must run the following command as root : Online Documentation The rsyslog home page offers additional documentation, configuration examples, and video tutorials. Make sure to consult the documents relevant to the version you are using: RainerScript documentation on the rsyslog Home Page - Commented summary of data types, expressions, and functions available in RainerScript . rsyslog version 7 documentation on the rsyslog home page - Version 7 of rsyslog is available for Red Hat Enterprise Linux 7 in the rsyslog package. Description of queues on the rsyslog Home Page - General information on various types of message queues and their usage. See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
|
[
"FACILITY . PRIORITY",
"kern.*",
"mail.crit",
"cron.!info,!debug",
": PROPERTY , [!] COMPARE_OPERATION , \" STRING \"",
":msg, contains, \"error\"",
":hostname, isequal, \"host1\"",
":msg, !regex, \"fatal .* error\"",
"if EXPRESSION then ACTION else ACTION",
"if USDprogramname == 'prog1' then { action(type=\"omfile\" file=\"/var/log/prog1.log\") if USDmsg contains 'test' then action(type=\"omfile\" file=\"/var/log/prog1test.log\") else action(type=\"omfile\" file=\"/var/log/prog1notest.log\") }",
"FILTER PATH",
"cron.* /var/log/cron.log",
"FILTER - PATH",
"FILTER ? DynamicFile",
"@( z NUMBER ) HOST : PORT",
". @192.168.0.1",
". @@example.com:6514",
". @(z9)[2001:db8::1]",
"USDoutchannel NAME , FILE_NAME , MAX_SIZE , ACTION",
"FILTER :omfile:USD NAME",
"USDoutchannel log_rotation, /var/log/test_log.log, 104857600, /home/joe/log_rotation_script",
". :omfile:USDlog_rotation",
"FILTER ^ EXECUTABLE ; TEMPLATE",
". ^test-program;template",
": PLUGIN : DB_HOST , DB_NAME , DB_USER , DB_PASSWORD ; TEMPLATE",
"module(load=\"ommysql\") # Output module for MySQL support module(load=\"ompgsql\") # Output module for PostgreSQL support",
"local5.* stop",
"cron.* stop",
"FILTER ACTION & ACTION & ACTION",
"kern.=crit user1 & ^test-program;temp & @192.168.0.1",
"template(name=\"TEMPLATE_NAME\" type=\"string\" string=\"text %PROPERTY% more text\" [option.OPTION=\"on\"])",
"template(name=\"DynamicFile\" type=\"list\") { constant(value=\"/var/log/test_logs/\") property(name=\"timegenerated\") constant(value\"-test.log\") }",
". ?DynamicFile",
"% PROPERTY_NAME : FROM_CHAR : TO_CHAR : OPTION %",
"%msg%",
"%msg:1:2%",
"%msg:::drop-last-lf%",
"%timegenerated:1:10:date-rfc3339%",
"template(name=\"verbose\" type=\"list\") { property(name=\"syslogseverity\") property(name=\"syslogfacility\") property(name=\"timegenerated\") property(name=\"HOSTNAME\") property(name=\"syslogtag\") property(name=\"msg\") constant(value=\"\\n\") }",
"template(name=\"wallmsg\" type=\"list\") { constant(value=\"\\r\\n\\7Message from syslogd@\") property(name=\"HOSTNAME\") constant(value=\" at \") property(name=\"timegenerated\") constant(value=\" ...\\r\\n \") property(name=\"syslogtag\") constant(value=\" \") property(name=\"msg\") constant(value=\"\\r\\n\") }",
"template(name=\"dbFormat\" type=\"list\" option.sql=\"on\") { constant(value=\"insert into SystemEvents (Message, Facility, FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag)\") constant(value=\" values ('\") property(name=\"msg\") constant(value=\"', \") property(name=\"syslogfacility\") constant(value=\", '\") property(name=\"hostname\") constant(value=\"', \") property(name=\"syslogpriority\") constant(value=\", '\") property(name=\"timereported\" dateFormat=\"mysql\") constant(value=\"', '\") property(name=\"timegenerated\" dateFormat=\"mysql\") constant(value=\"', \") property(name=\"iut\") constant(value=\", '\") property(name=\"syslogtag\") constant(value=\"')\") }",
"template(name=\"RSYSLOG_DebugFormat\" type=\"string\" string=\"Debug line with all properties:\\nFROMHOST: '%FROMHOST%', fromhost-ip: '%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\\nsyslogtag '%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%', PROCID: '%PROCID%', MSGID: '%MSGID%',\\nTIMESTAMP: '%TIMESTAMP%', STRUCTURED-DATA: '%STRUCTURED-DATA%',\\nmsg: '%msg%'\\nescaped msg: '%msg:::drop-cc%'\\nrawmsg: '%rawmsg%'\\n\\n\")",
"template(name=\"RSYSLOG_SyslogProtocol23Format\" type=\"string\" string=\"%PRI%1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %APP-NAME% %PROCID% %MSGID% %STRUCTURED-DATA% %msg%\\n \")",
"template(name=\"RSYSLOG_FileFormat\" type=\"list\") { property(name=\"timestamp\" dateFormat=\"rfc3339\") constant(value=\" \") property(name=\"hostname\") constant(value=\" \") property(name=\"syslogtag\") property(name=\"msg\" spifno1stsp=\"on\" ) property(name=\"msg\" droplastlf=\"on\" ) constant(value=\"\\n\") }",
"template(name=\"RSYSLOG_TraditionalFileFormat\" type=\"list\") { property(name=\"timestamp\") constant(value=\" \") property(name=\"hostname\") constant(value=\" \") property(name=\"syslogtag\") property(name=\"msg\" spifno1stsp=\"on\" ) property(name=\"msg\" droplastlf=\"on\" ) constant(value=\"\\n\") }",
"template(name=\"ForwardFormat\" type=\"list\") { constant(value=\"<\") property(name=\"pri\") constant(value=\">\") property(name=\"timestamp\" dateFormat=\"rfc3339\") constant(value=\" \") property(name=\"hostname\") constant(value=\" \") property(name=\"syslogtag\" position.from=\"1\" position.to=\"32\") property(name=\"msg\" spifno1stsp=\"on\" ) property(name=\"msg\") }",
"template(name=\"TraditionalForwardFormat\" type=\"list\") { constant(value=\"<\") property(name=\"pri\") constant(value=\">\") property(name=\"timestamp\") constant(value=\" \") property(name=\"hostname\") constant(value=\" \") property(name=\"syslogtag\" position.from=\"1\" position.to=\"32\") property(name=\"msg\" spifno1stsp=\"on\" ) property(name=\"msg\") }",
"global(localHostname=\"machineXY\")",
"rotate log files weekly weekly keep 4 weeks worth of backlogs rotate 4 uncomment this if you want your log files compressed compress",
"/var/log/messages { rotate 5 weekly postrotate /usr/bin/killall -HUP syslogd endscript }",
"[Service] LimitNOFILE=16384",
"USDInputFileName /tmp/inputfile USDInputFileTag tag1: USDInputFileStateFile inputfile-state USDInputRunFileMonitor",
"input(type=\"imfile\" file=\"/tmp/inputfile\" tag=\"tag1:\" statefile=\"inputfile-state\")",
"USDRuleSet rulesetname rule rule2",
"USDRuleSet RSYSLOG_DefaultRuleset",
"ruleset(name=\" rulesetname \") { rule rule2 call rulesetname2 … }",
"input(type=\" input_type \" port=\" port_num \" ruleset=\" rulesetname \");",
"ruleset(name=\"remote-6514\") { action(type=\"omfile\" file=\"/var/log/remote-6514\") } ruleset(name=\"remote-601\") { cron.* action(type=\"omfile\" file=\"/var/log/remote-601-cron\") mail.* action(type=\"omfile\" file=\"/var/log/remote-601-mail\") } input(type=\"imtcp\" port=\"6514\" ruleset=\"remote-6514\"); input(type=\"imtcp\" port=\"601\" ruleset=\"remote-601\");",
"object (queue.type= \"queue_type\" )",
"object (queue.type= \"Direct\" )",
"object (queue.type= \"Disk\" )",
"object (queue.size= \"size\" )",
"object (queue.filename= \"name\" )",
"object (queue.maxfilesize= \"size\" )",
"object (queue.type= \"LinkedList\" )",
"object (queue.type= \"FixedArray\" )",
"object (queue.highwatermark= \"number\" )",
"object (queue.lowwatermark= \"number\" )",
". action(type=\"omfwd\" queue.type=\"LinkedList\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveonshutdown=\"on\" Target=\"example.com\" Port=\"6514\" Protocol=\"tcp\")",
". action(type=\"omfwd\" queue.type=\"LinkedList\" queue.filename=\"example_fwd1\" action.resumeRetryCount=\"-1\" queue.saveonshutdown=\"on\" Target=\"example1.com\" Protocol=\"tcp\") . action(type=\"omfwd\" queue.type=\"LinkedList\" queue.filename=\"example_fwd2\" action.resumeRetryCount=\"-1\" queue.saveonshutdown=\"on\" Target=\"example2.com\" Protocol=\"tcp\")",
"~]# mkdir /rsyslog",
"~]# yum install policycoreutils-python",
"~]# semanage fcontext -a -t syslogd_var_lib_t /rsyslog",
"~]# restorecon -R -v /rsyslog restorecon reset /rsyslog context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:syslogd_var_lib_t:s0",
"~]# ls -Zd /rsyslog drwxr-xr-x. root root system_u:object_r:syslogd_var_lib_t:s0 /rsyslog",
"~]# mkdir /rsyslog/work/",
"global(workDirectory=\"/rsyslog/work\")",
"object (queue.highwatermark= \"number\" )",
"object (queue.maxdiskspace= \"number\" )",
"object (queue.discardmark= \"number\" )",
"object (queue.discardseverity= \"number\" )",
"object (queue.dequeuetimebegin= \"hour\" )",
"object (queue.dequeuetimeend= \"hour\" )",
"object (queue.workerthreadminimummessages= \"number\" )",
"object (queue.workerthreads= \"number\" )",
"object (queue.timeoutworkerthreadshutdown= \"time\" )",
"object (queue.DequeueBatchSize= \"number\" )",
"object (queue.timeoutshutdown= \"time\" )",
"object (queue.timeoutactioncompletion= \"time\" )",
"object (queue.saveonshutdown= \"on\" )",
"action(type=\" action_type \"queue.size=\" queue_size \" queue.type=\" queue_type \" queue.filename=\" file_name \"",
"action(type=\"omfile\" queue.size=\"10000\" queue.type=\"linkedlist\" queue.filename=\"logfile\")",
". action(type=\"omfile\" file=\"/var/lib/rsyslog/ log_file )",
". action(type=\"omfile\" queue.filename=\" log_file \" queue.type=\"linkedlist\" queue.size=\"10000\" )",
"global(workDirectory=\" /directory \")",
". action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\"example.com\" port=\"6514\" protocol=\"tcp\" )",
"~]# yum install rsyslog",
"~]# semanage port -l | grep syslog syslog_tls_port_t tcp 6514, 10514 syslog_tls_port_t udp 6514, 10514 syslogd_port_t tcp 601, 20514 syslogd_port_t udp 514, 601, 20514",
"~]# yum install policycoreutils-python",
"~]# semanage port -l | grep 514 output omitted rsh_port_t tcp 514 syslogd_port_t tcp 6514, 601 syslogd_port_t udp 514, 6514, 601",
"~]# semanage port -a -t syslogd_port_t -p tcp 10514",
"~]# semanage port -l | grep syslog",
"~]# service rsyslog restart",
"~]# netstat -tnlp | grep rsyslog tcp 0 0 0.0.0.0: 10514 0.0.0.0:* LISTEN 2528/rsyslogd tcp 0 0 :::10514 :::* LISTEN 2528/rsyslogd",
"~]# firewall-cmd --zone=zone --add-port=10514/tcp success",
"~]# firewall-cmd --list-all public (default, active) interfaces: eth0 sources: services: dhcpv6-client ssh ports: 10514/tcp masquerade: no forward-ports: icmp-blocks: rich rules:",
"Define templates before the rules that use them # Per-Host Templates for Remote Systems # USDtemplate TmplAuthpriv, \"/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" USDtemplate TmplMsg, \"/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\"",
"Provides TCP syslog reception USDModLoad imtcp Adding this ruleset to process remote messages USDRuleSet remote1 authpriv.* ?TmplAuthpriv *.info;mail.none;authpriv.none;cron.none ?TmplMsg USDRuleSet RSYSLOG_DefaultRuleset #End the rule set by switching back to the default rule set USDInputTCPServerBindRuleset remote1 #Define a new input and bind it to the \"remote1\" rule set USDInputTCPServerRun 10514",
"~]# systemctl start rsyslog",
"~]# systemctl enable rsyslog",
"template(name=\"TmplAuthpriv\" type=\"string\" string=\"/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" ) template(name=\"TmplMsg\" type=\"string\" string=\"/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log\" )",
"template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") }",
"module(load=\"imtcp\") ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"10514\" ruleset=\"remote1\")",
"module(load=\" MODULE \")",
"module(load=\"imfile\")",
"module(load=\"imfile\" PollingInterval=\"int\")",
"File 1 input(type=\"imfile\" File=\"path_to_file\" Tag=\"tag:\" Severity=\"severity\" Facility=\"facility\") File 2 input(type=\"imfile\" File=\"path_to_file2\")",
"module(load=\"imfile\") input(type=\"imfile\" File=\"/var/log/httpd/error_log\" Tag=\"apache-error:\")",
"module(load=\"ommysql\") . action(type\"ommysql\" server=\"database-server\" db=\"database-name\" uid=\"database-userid\" pwd=\"database-password\" serverport=\"1234\")",
"global(defaultnetstreamdriver=\"gtls\")",
"global(defaultnetstreamdrivercafile=\"path_ca.pem\" defaultnetstreamdrivercertfile=\"path_cert.pem\" defaultnetstreamdriverkeyfile=\"path_key.pem\")",
"module(load=\"imtcp\" StreamDriver.Mode=\"number\" StreamDriver.AuthMode=\"anon\")",
"input(type=\"imtcp\" port=\"port′′)",
"global(defaultnetstreamdrivercafile=\"path_ca.pem\")",
"global(defaultnetstreamdriver=\"gtls\")",
"module(load=\"imtcp\" streamdrivermode=\"number\" streamdriverauthmode=\"anon\") input(type=\"imtcp\" address=\"server.net\" port=\"port\")",
"USDModLoad imgssapi",
"USDInputGSSServerServiceName name USDInputGSSServerPermitPlainTCP on USDInputGSSServerMaxSessions number USDInputGSSServerRun port",
"USDModLoad imgssapi USDInputGSSServerPermitPlainTCP on USDInputGSSServerRun 1514",
"module(load=\"imuxsock\") module(load=\"omrelp\") module(load=\"imtcp\")",
"input(type=\"imtcp\" port=\" port ′′)",
"action(type=\"omrelp\" target=\" target_IP ′′ port=\" target_port ′′)",
"module(load=\"imuxsock\") module(load=\"imrelp\" ruleset=\"relp\")",
"input(type=\"imrelp\" port=\" target_port ′′)",
"ruleset (name=\"relp\") { action(type=\"omfile\" file=\" log_path \") }",
"module(load=\"imuxsock\") module(load=\"omrelp\") module(load=\"imtcp\")",
"input(type=\"imtcp\" port=\" port ′′)",
"action(type=\"omrelp\" target=\" target_IP ′′ port=\" target_port ′′ tls=\"on\" tls.caCert=\" path_ca.pem \" tls.myCert=\" path_cert.pem \" tls.myPrivKey=\" path_key.pem \" tls.authmode=\" mode \" tls.permittedpeer=[\" peer_name \"] )",
"module(load=\"imuxsock\") module(load=\"imrelp\" ruleset=\"relp\")",
"input(type=\"imrelp\" port=\" target_port ′′ tls=\"on\" tls.caCert=\" path_ca.pem \" tls.myCert=\" path_cert.pem \" tls.myPrivKey=\" path_key.pem \" tls.authmode=\" name \" tls.permittedpeer=[\" peer_name \",\" peer_name1 \",\" peer_name2 \"] )",
"ruleset (name=\"relp\") { action(type=\"omfile\" file=\" log_path \") }",
"module(load=\"imuxsock\" SysSock.Use=\"on\" SysSock.Name=\"/run/systemd/journal/syslog\")",
"module(load=\"omjournal\") action(type=\"omjournal\")",
"module(load=\"imtcp\") module(load=\"omjournal\") ruleset(name=\"remote\") { action(type=\"omjournal\") } input(type=\"imtcp\" port=\"10514\" ruleset=\"remote\")",
"Oct 25 10:20:37 localhost anacron[1395]: Jobs will be executed sequentially",
"{\"timestamp\":\"2013-10-25T10:20:37\", \"host\":\"localhost\", \"program\":\"anacron\", \"pid\":\"1395\", \"msg\":\"Jobs will be executed sequentially\"}",
"@cee: {\"pid\":17055, \"uid\":1000, \"gid\":1000, \"appname\":\"logger\", \"msg\":\"Message text.\"}",
"module(load=\"imjournal\" PersistStateInterval=\"number_of_messages\" StateFile=\"path\" ratelimit.interval=\"seconds\" ratelimit.burst=\"burst_number\" IgnorePreviousMessages=\"off/on\")",
"module(load\"imjournal\") module(load\"imuxsock\" SysSock.Use=\"off\" Socket=\"/run/systemd/journal/syslog\")",
"template(name=\"CEETemplate\" type=\"string\" string=\"%TIMESTAMP% %HOSTNAME% %syslogtag% @cee: %USD!all-json%\\n\")",
"(USD!hostname == \" hostname \" && USD!UID== \" UID \")",
"module(load\"mmjsonparse\") . :mmjsonparse:",
"module(load\"ommongodb\") . action(type=\"ommongodb\" server=\"DB_server\" serverport=\"port\" db=\"DB_name\" collection=\"collection_name\" uid=\"UID\" pwd=\"password\")",
"rsyslogd -dn",
"export RSYSLOG_DEBUGLOG=\" path \" export RSYSLOG_DEBUG=\"Debug\"",
"rsyslogd -N 1",
"journalctl",
"journalctl -- Logs begin at Thu 2013-08-01 15:42:12 CEST, end at Thu 2013-08-01 15:48:48 CEST. -- Aug 01 15:42:12 localhost systemd-journal[54]: Allowing runtime journal files to grow to 49.7M. Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpuset Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpu [...]",
"journalctl -n Number",
"journalctl -o form",
"journalctl -o verbose [...] Fri 2013-08-02 14:41:22 CEST [s=e1021ca1b81e4fc688fad6a3ea21d35b;i=55c;b=78c81449c920439da57da7bd5c56a770;m=27cc _BOOT_ID=78c81449c920439da57da7bd5c56a770 PRIORITY=5 SYSLOG_FACILITY=3 _TRANSPORT=syslog _MACHINE_ID=69d27b356a94476da859461d3a3bc6fd _HOSTNAME=localhost.localdomain _PID=562 _COMM=dbus-daemon _EXE=/usr/bin/dbus-daemon _CMDLINE=/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation _SYSTEMD_CGROUP=/system/dbus.service _SYSTEMD_UNIT=dbus.service SYSLOG_IDENTIFIER=dbus SYSLOG_PID=562 _UID=81 _GID=81 _SELINUX_CONTEXT=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023 MESSAGE=[system] Successfully activated service 'net.reactivated.Fprint' _SOURCE_REALTIME_TIMESTAMP=1375447282839181 [...]",
"usermod -a -G adm username",
"journalctl -f",
"journalctl -p priority",
"journalctl -p err",
"journalctl -b",
"journalctl --since = value --until = value",
"journalctl -p warning --since=\"2013-3-16 23:59:59\"",
"journalctl -F fieldname",
"journalctl fieldname = value",
"journalctl",
"journalctl fieldname =",
"journalctl fieldname = value1 fieldname = value2",
"journalctl fieldname1 = value fieldname2 = value",
"journalctl fieldname1 = value + fieldname2 = value",
"journalctl _UID=70 _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=crond.service",
"journalctl -f fieldname = value",
"mkdir -p /var/log/journal/",
"systemctl restart systemd-journald",
"~]# yum install gnome-system-log",
"~]USD gnome-system-log",
"~]# yum install rsyslog-doc"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Viewing_and_Managing_Log_Files
|
Chapter 4. RHEL 8.2.1 release
|
Chapter 4. RHEL 8.2.1 release Red Hat makes Red Hat Enterprise Linux 8 content available quarterly, in between minor releases (8.Y). The quarterly releases are numbered using the third digit (8.Y.1). The new features in the RHEL 8.2.1 release are described below. 4.1. New features JDK Mission Control rebased to version 7.1.1 The JDK Mission Control (JMC) profiler for HotSpot JVMs, provided by the jmc:rhel8 module stream, has been upgraded to version 7.1.1 with the RHEL 8.2.1 release. This update includes numerous bug fixes and enhancements, including: Multiple rule optimizations A new JOverflow view based on Standard Widget Toolkit (SWT) A new flame graph view A new way of latency visualization using the High Dynamic Range (HDR) Histogram The jmc:rhel8 module stream has two profiles: The common profile, which installs the entire JMC application The core profile, which installs only the core Java libraries ( jmc-core ) To install the common profile of the jmc:rhel8 module stream, use: Change the profile name to core to install only the jmc-core package. (BZ#1792519) Rust Toolset rebased to version 1.43 Rust Toolset has been updated to version 1.43. Notable changes include: Useful line numbers are now included in Option and Result panic messages where they were invoked. Expanded support for matching on subslice patterns. The matches! macro provides pattern matching that returns a boolean value. item fragments can be interpolated into traits, impls, and extern blocks. Improved type inference around primitives. Associated constants for floats and integers. To install the Rust Toolset module, run the following command as root : For usage information, see the Using Rust Toolset documentation. (BZ#1811997) Containers registries now support the skopeo sync command With this enhancement, users can use skopeo sync command to synchronize container registries and local registries. The skopeo sync command is useful to synchronize a local container registry mirror, and to populate registries running inside of air-gapped environments. The skopeo sync command requires both source ( --src ) and destination ( --dst ) transports to be specified separately. Available source and destination transports are docker (repository hosted on a container registry) and dir ( directory in a local directory path). The source transports also include yaml (local YAML file path). For information on the usage of skopeo sync , see the skopeo-sync man page. (BZ#1811779) Configuration file container.conf is now available With this enhancement, users and administrators can specify default configuration options and command-line flags for container engines. Container engines read the /usr/share/containers/containers.conf and /etc/containers/containers.conf files if they exist. In the rootless mode, container engines read the USDHOME/.config/containers/containers.conf files. Fields specified in the containers.conf file override the default options, as well as options in previously read containers.conf files. The container.conf file is shared between Podman and Buildah and replaces the libpod.conf file. (BZ#11826486) You can now log into and out from a registry server With this enhancement, you can log into and logout from a specified registry server using the skopeo login and skopeo logout commands. The skopeo login command reads in the username and password from standard input. The username and password can also be set using the --username (or -u ) and --password (or -p ) options. 
You can specify the path of the authentication file by setting the --authfile flag. The default path is USD{XDG_RUNTIME_DIR}/containers/auth.json . For information on the usage of skopeo login and skopeo logout , see the skopeo-login and skopeo-logout man pages, respectively. (JIRA:RHELPLAN-47311) You can now reset the podman storage With this enhancement, users can use the podman system reset command to reset podman storage back to initial state. The podman system reset command removes all pods, containers, images and volumes. For more information, see the podman-system-reset man page. (JIRA:RHELPLAN-48941)
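A short sketch of the commands described in this chapter (the registry name, credentials, and paths are illustrative):
skopeo login --username myuser registry.example.com
skopeo sync --src docker --dest dir registry.example.com/rhel8/support-tools /var/lib/local-mirror
skopeo logout registry.example.com
podman system reset
The sync invocation copies a repository from a remote registry into a local directory, and podman system reset returns local storage to its initial state.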
|
[
"yum module install jmc:rhel8/common",
"yum module install rust-toolset"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/RHEL-8_2_1_release
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/12.0_release_notes/making_open_source_more_inclusive
|
Chapter 5. OLMConfig [operators.coreos.com/v1]
|
Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. packageServerSyncInterval string PackageServerSyncInterval is used to define the sync interval for packagerserver pods. Packageserver pods periodically check the status of CatalogSources; this specifies the period using duration format (e.g. "60m"). For this parameter, only hours ("h"), minutes ("m"), and seconds ("s") may be specified. When not specified, the period defaults to the value specified within the packageserver. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. 
Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs HTTP method DELETE Description delete collection of OLMConfig Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.2. HTTP responses HTTP code Reponse body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body OLMConfig schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method DELETE Description delete an OLMConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.9. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body OLMConfig schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method GET Description read status of the specified OLMConfig Table 5.16. 
HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty
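As an illustration of how these endpoints are typically exercised, the following sketch patches the cluster-scoped OLMConfig resource to disable the Copied CSVs feature and then reads back its status conditions (the resource name cluster and the use of the oc client are assumptions, not stated in this chapter):
oc patch olmconfig cluster --type merge -p '{"spec":{"features":{"disableCopiedCSVs":true}}}'
oc get olmconfig cluster -o jsonpath='{.status.conditions}'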
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operatorhub_apis/olmconfig-operators-coreos-com-v1
|
Chapter 5. Securing brokers
|
Chapter 5. Securing brokers 5.1. Securing connections When brokers are connected to messaging clients, or brokers are connected to other brokers, you can secure these connections using Transport Layer Security (TLS). There are two TLS configurations that you can use: One-way TLS, where only the broker presents a certificate. This is the most common configuration. Two-way (or mutual ) TLS, where both the broker and the client (or other broker) present certificates. 5.1.1. Configuring one-way TLS The following procedure shows how to configure a given acceptor for one-way TLS. Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given acceptor, add the sslEnabled key and set the value to true . In addition, add the keyStorePath and keyStorePassword keys. Set values that correspond to your broker key store. For example: <acceptor name="artemis">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!</acceptor> 5.1.2. Configuring two-way TLS The following procedure shows how to configure two-way TLS. Prerequisites You must have already configured your given acceptor for one-way TLS. For more information, see Section 5.1.1, "Configuring one-way TLS" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For the acceptor that you previously configured for one-way TLS, add the needClientAuth key. Set the value to true . For example: <acceptor name="artemis">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!;needClientAuth=true</acceptor> The configuration in the preceding step assumes that the client's certificate is signed by a trusted provider. If the client's certificate is not signed by a trusted provider (it is self-signed, for example) then the broker needs to import the client's certificate into a trust store. In this case, add the trustStorePath and trustStorePassword keys. Set values that correspond to your broker trust store. For example: <acceptor name="artemis">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!;needClientAuth=true;trustStorePath=../etc/client.truststore;trustStorePassword=5678!</acceptor> Note AMQ Broker supports multiple protocols, and each protocol and platform has different ways to specify TLS parameters. However, in the case of a client using Core Protocol (a bridge), the TLS parameters are configured on the connector URL, much like on the broker's acceptor. If a self-signed certificate is listed as a trusted certificate in a Java Virtual Machine (JVM) truststore, the JVM does not validate the expiry date of the certificate. In a production environment, Red Hat recommends that you use a certificate that is signed by a Certificate Authority. 5.1.3. TLS configuration options The following table shows all of the available TLS configuration options. Option Note sslEnabled Specifies whether SSL is enabled for the connection. Must be set to true to enable TLS. The default value is false . keyStorePath When used on an acceptor: Path to the TLS keystore on the broker that holds the broker certificates (whether self-signed or signed by an authority). When used on a connector: Path to the TLS keystore on the client that holds the client certificates. This is relevant for a connector only if you are using two-way TLS. Although you can configure this value on the broker, it is downloaded and used by the client. 
If the client needs to use a different path from that set on the broker, it can override the broker setting by using either the standard javax.net.ssl.keyStore system property or the AMQ-specific org.apache.activemq.ssl.keyStore system property. The AMQ-specific system property is useful if another component on the client is already making use of the standard Java system property. keyStorePassword When used on an acceptor: Password for the keystore on the broker. When used on a connector: Password for the keystore on the client. This is relevant for a connector only if you are using two-way TLS. Although you can configure this value on the broker, it is downloaded and used by the client. If the client needs to use a different password from that set on the broker, then it can override the broker setting by using either the standard javax.net.ssl.keyStorePassword system property or the AMQ-specific org.apache.activemq.ssl.keyStorePassword system property. The AMQ-specific system property is useful if another component on the client is already making use of the standard Java system property. trustStorePath When used on an acceptor: Path to the TLS truststore on the broker that holds the keys of all clients that the broker trusts. This is relevant for an acceptor only if you are using two-way TLS. When used on a connector: Path to TLS truststore on the client that holds the public keys of all brokers that the client trusts. Although you can configure this value on the broker, it is downloaded and used by the client. If the client needs to use a different path from that set on the server then it can override the server-side setting by using either using the standard javax.net.ssl.trustStore system property or the AMQ-specific org.apache.activemq.ssl.trustStore system property. The AMQ-specific system property is useful if another component on the client is already making use of the standard Java system property. trustStorePassword When used on an acceptor: Password for the truststore on the broker. This is relevant for an acceptor only if you are using two-way TLS. When used on a connector: Password for the truststore on the client. Although you can configure this value on the broker, it is downloaded and used by the client. If the client needs to use a different password from that set on the broker, then it can override the broker setting by using either the standard javax.net.ssl.trustStorePassword system property or the AMQ-specific org.apache.activemq.ssl.trustStorePassword system property. The AMQ-specific system property is useful if another component on the client is already making use of the standard Java system property. enabledCipherSuites A comma-separated list of cipher suites used for TLS communication for both acceptors or connectors. Specify the most secure cipher suite(s) supported by your client application. If you specify a comma-separated list of cipher suites that are common to both the broker and the client, or you do not specify any cipher suites, the broker and client mutually negotiate a cipher suite to use. If you do not know which cipher suites to specify, you can first establish a broker-client connection with your client running in debug mode to verify the cipher suites that are common to both the broker and the client. Then, configure enabledCipherSuites on the broker. The cipher suites available depend on the TLS protocol versions used by the broker and clients. 
If the default TLS protocol version changes after you upgrade the broker, you might need to select an earlier TLS protocol version to ensure that the broker and the clients can use a common cipher suite. For more information, see enabledProtocols . enabledProtocols Whether used on an acceptor or connector, this is a comma-separated list of protocols used for TLS communication. If you don't specify a TLS protocol version, the broker uses the JVM's default version. If the broker uses the default TLS protocol version for the JVM and that version changes after you upgrade the broker, the TLS protocol versions used by the broker and clients might be incompatible. While it is recommended that you use the later TLS protocol version, you can specify an earlier version in enabledProtocols to interoperate with clients that do not support a newer TLS protocol version. needClientAuth This property is only for an acceptor. It instructs a client connecting to the acceptor that two-way TLS is required. Valid values are true or false . The default value is false . 5.2. Authenticating clients 5.2.1. Client authentication methods To configure client authentication on the broker, you can use the following methods: User name- and password-based authentication Directly validate user credentials using one of these options: Check the credentials against a set of properties files stored locally on the broker. You can also configure a guest account that allows limited access to the broker and combine login modules to support more complex use cases. Configure a Lightweight Directory Access Protocol (LDAP) login module to check client credentials against user data stored in a central X.500 directory server. Certificate-based authentication Configure two-way Transport Layer Security (TLS) to require both the broker and client to present certificates for mutual authentication. An administrator must also configure properties files that define approved client users and roles. These properties files are stored on the broker. Kerberos-based authentication Configure the broker to authenticate Kerberos security credentials for the client using the GSSAPI mechanism from the Simple Authentication and Security Layer (SASL) framework. The sections that follow describe how to configure both user-and-password- and certificate-based authentication. Additional resources To learn about complete authentication and authorization workflows for LDAP and Kerberos, see: Section 5.4, "Using LDAP for authentication and authorization" Section 5.5, "Using Kerberos for authentication and authorization" 5.2.2. Configuring user and password authentication based on properties files AMQ Broker supports a flexible role-based security model for applying security to queues based on their addresses. Queues are bound to addresses either one-to-one (for point-to-point messaging) or many-to-one (for publish-subscribe messaging). When a message is sent to an address, the broker looks up the set of queues that are bound to that address and routes the message to that set of queues. When you require basic user and password authentication, use PropertiesLoginModule to define it. 
This login module checks user credentials against the following configuration files that are stored locally on the broker: artemis-users.properties Used to define users and corresponding passwords artemis-roles.properties Used to define roles and assign users to those roles login.config Used to configure login modules for user and password authentication and guest access The artemis-users.properties file can contain hashed passwords, for security. The following sections show how to configure: Basic user and password authentication User and password authentication that includes guest access 5.2.2.1. Configuring basic user and password authentication The following procedure shows how to configure basic user and password authentication. Procedure Open the <broker_instance_dir> /etc/login.config configuration file. By default, this file in a new AMQ Broker 7.11 instance include the following lines: activemq { org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient debug=false reload=true org.apache.activemq.jaas.properties.user="artemis-users.properties" org.apache.activemq.jaas.properties.role="artemis-roles.properties"; }; activemq Alias for the configuration. org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule The implementation class. sufficient Flag that specifies what level of success is required for the PropertiesLoginModule . The values that you can set are: required : The login module is required to succeed. Authentication continues to proceed down the list of login modules configured under the given alias, regardless of success or failure. requisite : The login module is required to succeed. A failure immediately returns control to the application. Authentication does not proceed down the list of login modules configured under the given alias. sufficient : The login module is not required to succeed. If it is successful, control returns to the application and authentication does not proceed further. If it fails, the authentication attempt proceeds down the list of login modules configured under the given alias. optional : The login module is not required to succeed. Authentication continues down the list of login modules configured under the given alias, regardless of success or failure. org.apache.activemq.jaas.properties.user Specifies the properties file that defines a set of users and passwords for the login module implementation. org.apache.activemq.jaas.properties.role Specifies the properties file that maps users to defined roles for the login module implementation. Open the <broker_instance_dir> /etc/artemis-users.properties configuration file. Add users and assign passwords to the users. For example: user1=secret user2=access user3=myPassword Open the <broker_instance_dir> /etc/artemis-roles.properties configuration file. Assign role names to the users you previously added to the artemis-users.properties file. For example: admin=user1,user2 developer=user3 Open the <broker_instance_dir> /etc/bootstrap.xml configuration file. If necessary, add your security domain alias (in this instance, activemq ) to the file, as shown below: <jaas-security domain="activemq"/> 5.2.2.2. Configuring guest access For a user who does not have login credentials, or whose credentials fail authentication, you can grant limited access to the broker using a guest account. You can create a broker instance with guest access enabled using the command-line switch; --allow-anonymous (the converse of which is --require-login ). 
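For example, a minimal sketch of creating a broker instance with guest access enabled (the installation path and instance directory are illustrative):
<install_dir>/bin/artemis create --allow-anonymous /var/opt/amq/mybroker
Passing --require-login instead forces every connecting client to supply valid credentials.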
The following procedure shows how to configure guest access. Prerequisites This procedure assumes that you have already configured basic user and password authentication. To learn more, see Section 5.2.2.1, "Configuring basic user and password authentication" . Procedure Open the <broker_instance_dir> /etc/login.config configuration file that you previously configured for basic user and password authentication. After the properties login module configuration that you previously added, add a guest login module configuration. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient debug=true org.apache.activemq.jaas.properties.user="artemis-users.properties" org.apache.activemq.jaas.properties.role="artemis-roles.properties"; org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient debug=true org.apache.activemq.jaas.guest.user="guest" org.apache.activemq.jaas.guest.role="restricted"; }; org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule The implementation class. org.apache.activemq.jaas.guest.user The user name assigned to anonymous users. org.apache.activemq.jaas.guest.role The role assigned to anonymous users. Based on the preceding configuration, user and password authentication module is activated if the user supplies credentials. Guest authentication is activated if the user supplies no credentials, or if the credentials supplied are incorrect. 5.2.2.2.1. Guest access example The following example shows configuration of guest access for the use case where only those users with no credentials are logged in as guests. In this example, observe that the order of the login modules is reversed compared with the configuration procedure. Also, the flag attached to the properties login module is changed to requisite . activemq { org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient debug=true credentialsInvalidate=true org.apache.activemq.jaas.guest.user="guest" org.apache.activemq.jaas.guest.role="guests"; org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule requisite debug=true org.apache.activemq.jaas.properties.user="artemis-users.properties" org.apache.activemq.jaas.properties.role="artemis-roles.properties"; }; Based on the preceding configuration, the guest authentication module is activated if no login credentials are supplied. For this use case, the credentialsInvalidate option must be set to true in the configuration of the guest login module. The properties login module is activated if credentials are supplied. The credentials must be valid. Additional resources For more information on the Java Authentication and Authorization Service (JAAS), see the documentation from your Java vendor. For example, for an Oracle tutorial on configuring login.config , see JAAS Login Configuration File in the Oracle Java documentation. To learn how to configure an LDAP login module to validate client credentials, see Section 5.4.1, "Configuring LDAP to authenticate clients" . For more information about encrypting passwords in configuration files, see Section 5.9.2, "Encrypting a password in a configuration file" . 5.2.3. Configuring certificate-based authentication The Java Authentication and Authorization Service (JAAS) certificate login module handles authentication and authorization for clients that are using Transport Layer Security (TLS). 
The module requires two-way Transport Layer Security (TLS) to be in use and clients to be configured with their own certificates. Authentication is performed during the TLS handshake, not directly by the JAAS certificate login module. The role of the certificate login module is to: Constrain the set of acceptable users. Only the user Distinguished Names (DNs) explicitly listed in the relevant properties file are eligible to be authenticated. Associate a list of groups with the received user identity. This facilitates authorization. Require the presence of an incoming client certificate (by default, the TLS layer is configured to treat the presence of a client certificate as optional). The certificate login module stores a collection of certificate DNs in a pair of flat text files. The files associate a user name and a list of group IDs with each DN. The certificate login module is implemented by the org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule class. 5.2.3.1. Configuring the broker to use certificate-based authentication The following procedure shows how to configure the broker to use certificate-based authentication. Prerequisites You must have configured the broker to use two-way Transport Layer Security (TLS). For more information, see Section 5.1.2, "Configuring two-way TLS" . Procedure Obtain the Subject Distinguished Names (DNs) from user certificates previously imported to the broker key store. Export the certificate from the key store file into a temporary file. For example: Print the contents of the exported certificate: The output is similar to that shown below: The Owner entry is the Subject DN. The format used to enter the Subject DN depends on your platform. The string above could also be represented as; Configure certificate-based authentication. Open the <broker_instance_dir> /etc/login.config configuration file. Add the certificate login module and reference the user and roles properties files. For example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user="artemis-users.properties" org.apache.activemq.jaas.textfiledn.role="artemis-roles.properties"; }; org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule The implementation class. org.apache.activemq.jaas.textfiledn.user Specifies the properties file that defines a set of users and passwords for the login module implementation. org.apache.activemq.jaas.textfiledn.role Specifies the properties file that maps users to defined roles for the login module implementation. Open the <broker_instance_dir> /etc/artemis-users.properties configuration file. Users and their corresponding DNs are defined in this file. For example: system=CN=system,O=Progress,C=US user=CN=humble user,O=Progress,C=US guest=CN=anon,O=Progress,C=DE Based on the preceding configuration, for example, the user named system is mapped to the CN=system,O=Progress,C=US Subject DN. Open the <broker_instance_dir> /etc/artemis-roles.properties configuration file. The available roles and the users who hold those roles are defined in this file. For example: admins=system users=system,user guests=guest In the preceding configuration, for the users role, you list multiple users as a comma-separated list. Ensure that your security domain alias (in this instance, activemq ) is referenced in bootstrap.xml , as shown below: <jaas-security domain="activemq"/> 5.2.3.2. 
Configuring certificate-based authentication for AMQP clients Use the Simple Authentication and Security Layer (SASL) EXTERNAL mechanism configuration parameter to configure your AMQP client for certificate-based authentication when connecting to a broker. The broker authenticates the Transport Layer Security (TLS)/ Secure Sockets Layer (SSL) certificate of your AMQP client in the same way that it authenticates any certificate: The broker reads the TLS/SSL certificate of the client to obtain an identity from the certificate's subject. The certificate subject is mapped to a broker identity by the certificate login module. The broker then authorizes users based on their roles. The following procedure shows how to configure certificate-based authentication for AMQP clients. To enable your AMQP client to use certificate-based authentication, you must add configuration parameters to the URI that the client uses to connect to the broker. Prerequisites You must have configured: Two-way TLS. For more information, see Section 5.1.2, "Configuring two-way TLS" . The broker to use certificate-based authentication. For more information, see Section 5.2.3.1, "Configuring the broker to use certificate-based authentication" . Procedure Open the resource containing the URI for editing: amqps://localhost:5500 Add the parameter sslEnabled=true to enable TLS/SSL for the connection: amqps://localhost:5500?sslEnabled=true Add parameters related to the client trust store and key store to enable the exchange of TLS/SSL certificates with the broker: amqps://localhost:5500?sslEnabled=true&trustStorePath= <trust_store_path> &trustStorePassword= <trust_store_password> &keyStorePath= <key_store_path> &keyStorePassword= <key_store_password> Add the parameter saslMechanisms=EXTERNAL to request that the broker authenticate the client by using the identity found in its TLS/SSL certificate: amqps://localhost:5500?sslEnabled=true&trustStorePath= <trust_store_path> &trustStorePassword= <trust_store_password> &keyStorePath= <key_store_path> &keyStorePassword= <key_store_password> &saslMechanisms=EXTERNAL Additional resources For more information about certificate-based authentication in AMQ Broker, see Section 5.2.3.1, "Configuring the broker to use certificate-based authentication" . For more information about configuring your AMQP client, go to the Red Hat Customer Portal for product documentation specific to your client. 5.3. Authorizing clients 5.3.1. Client authorization methods To authorize clients to perform operations on the broker such as creating and deleting addresses and queues, and sending and consuming messages, you can use the following methods: User- and role-based authorization Configure broker security settings for authenticated users and roles. Configure LDAP to authorize clients Configure the Lightweight Directory Access Protocol (LDAP) login module to handle both authentication and authorization. The LDAP login module checks incoming credentials against user data stored in a central X.500 directory server and sets permissions based on user roles. Configure Kerberos to authorize clients Configure the Java Authentication and Authorization Service (JAAS) Krb5LoginModule login module to pass credentials to PropertiesLoginModule or LDAPLoginModule login modules, which map the Kerberos-authenticated users to AMQ Broker roles. 5.3.2. Configuring user- and role-based authorization 5.3.2.1.
Setting permissions Permissions are defined against queues (based on their addresses) via the <security-setting> element in the broker.xml configuration file. You can define multiple instances of <security-setting> in the <security-settings> element of the configuration file. You can specify an exact address match or you can define a wildcard match using the number sign ( # ) and asterisk ( * ) wildcard characters. Different permissions can be given to the set of queues that match an address. Those permissions are shown in the following table. To allow users to... Use this parameter... Create addresses createAddress Delete addresses deleteAddress Create a durable queue under matching addresses createDurableQueue Delete a durable queue under matching addresses deleteDurableQueue Create a non-durable queue under matching addresses createNonDurableQueue Delete a non-durable queue under matching addresses deleteNonDurableQueue Send a message to matching addresses send Consume a message from a queue bound to matching addresses consume Invoke management operations by sending management messages to the management address manage Browse a queue bound to the matching address browse For each permission, you specify a list of roles that are granted the permission. If a given user has any of the roles, they are granted the permission for that set of addresses. The sections that follow show some configuration examples for permissions. 5.3.2.1.1. Configuring message production for a single address The following procedure shows how to configure message production permissions for a single address. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a single <security-setting> element within the <security-settings> element. For the match key, specify an address. For example: <security-settings> <security-setting match="my.destination"> <permission type="send" roles="producer"/> </security-setting> </security-settings> Based on the preceding configuration, members of the producer role have send permissions for address my.destination . 5.3.2.1.2. Configuring message consumption for a single address The following procedure shows how to configure message consumption permissions for a single address. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a single <security-setting> element within the <security-settings> element. For the match key, specify an address. For example: <security-settings> <security-setting match="my.destination"> <permission type="consume" roles="consumer"/> </security-setting> </security-settings> Based on the preceding configuration, members of the consumer role have consume permissions for address my.destination . 5.3.2.1.3. Configuring complete access on all addresses The following procedure shows how to configure complete access to all addresses and associated queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a single <security-setting> element within the <security-settings> element. For the match key, to configure access to all addresses, specify the number sign ( # ) wildcard character. 
For example: <security-settings> <security-setting match="#"> <permission type="createDurableQueue" roles="guest"/> <permission type="deleteDurableQueue" roles="guest"/> <permission type="createNonDurableQueue" roles="guest"/> <permission type="deleteNonDurableQueue" roles="guest"/> <permission type="createAddress" roles="guest"/> <permission type="deleteAddress" roles="guest"/> <permission type="send" roles="guest"/> <permission type="browse" roles="guest"/> <permission type="consume" roles="guest"/> <permission type="manage" roles="guest"/> </security-setting> </security-settings> Based on the preceding configuration, all permissions are granted to members of the guest role on all queues. This can be useful in a development scenario where anonymous authentication was configured to assign the guest role to every user. Additional resources To learn about configuring more complex use cases, see Section 5.3.2.1.4, "Configuring multiple security settings" . 5.3.2.1.4. Configuring multiple security settings The following example procedure shows how to individually configure multiple security settings for a matching set of addresses. This contrasts with the preceding example in this section, which shows how to grant complete access to all addresses. Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a single <security-setting> element within the <security-settings> element. For the match key, include the number sign ( # ) wildcard character to apply the settings to a matching set of addresses. For example: <security-setting match="globalqueues.europe.#"> <permission type="createDurableQueue" roles="admin"/> <permission type="deleteDurableQueue" roles="admin"/> <permission type="createNonDurableQueue" roles="admin, guest, europe-users"/> <permission type="deleteNonDurableQueue" roles="admin, guest, europe-users"/> <permission type="send" roles="admin, europe-users"/> <permission type="consume" roles="admin, europe-users"/> </security-setting> match=globalqueues.europe.# The number sign ( # ) wildcard character is interpreted by the broker as "any sequence of words". Words are delimited by a period ( . ). In this example, the security settings apply to any address that starts with the string globalqueues.europe. permission type="createDurableQueue" Only users that have the admin role can create or delete durable queues bound to an address that starts with the string globalqueues.europe. permission type="createNonDurableQueue" Any users with the roles admin , guest , or europe-users can create or delete temporary queues bound to an address that starts with the string globalqueues.europe. permission type="send" Any users with the roles admin or europe-users can send messages to queues bound to an address that starts with the string globalqueues.europe. permission type="consume" Any users with the roles admin or europe-users can consume messages from queues bound to an address that starts with the string globalqueues.europe. (Optional) To apply different security settings to a more narrow set of addresses, add another <security-setting> element. For the match key, specify a more specific text string. For example: <security-setting match="globalqueues.europe.orders.#"> <permission type="send" roles="europe-users"/> <permission type="consume" roles="europe-users"/> </security-setting> In the second security-setting element, the globalqueues.europe.orders.# match is more specific than the globalqueues.europe.# match specified in the first security-setting element. 
For any addresses that match globalqueues.europe.orders.# , the permissions createDurableQueue , deleteDurableQueue , createNonDurableQueue , deleteNonDurableQueue are not inherited from the first security-setting element in the file. For example, for the address globalqueues.europe.orders.plastics , the only permissions that exist are send and consume for the role europe-users . Therefore, because permissions specified in one security-setting block are not inherited by another, you can effectively deny permissions in more specific security-setting blocks simply by not specifying those permissions. 5.3.2.1.5. Configuring a queue with a user When a queue is automatically created, the queue is assigned the user name of the connecting client. This user name is included as metadata on the queue. The name is exposed by JMX and in the AMQ Broker management console. The following procedure shows how to add a user name to a queue that you have manually defined in the broker configuration. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the user key. Assign a value. For example: <address name="ExampleQueue"> <anycast> <queue name="ExampleQueue" user="admin"/> </anycast> </address> Based on the preceding configuration, the admin user is assigned to queue ExampleQueue . Note Configuring a user on a queue does not change any of the security semantics for that queue - it is only used for metadata on that queue. The mapping between users and what roles they have is handled by a component called the security manager . The security manager reads user credentials from a properties file stored on the broker. By default, AMQ Broker uses the org.apache.activemq.artemis.spi.core.security.ActiveMQJAASSecurityManager security manager. This default security manager provides integration with JAAS and Red Hat JBoss Enterprise Application Platform (JBoss EAP) security. To learn how to use a custom security manager, see Section 5.6.2, "Specifying a custom security manager" . 5.3.2.2. Configuring role-based access control Role-based access control (RBAC) is used to restrict access to the attributes and methods of MBeans. RBAC enables administrators to grant users the correct level of access, based on role, to management channels such as the web console, management interface, and core messages. 5.3.2.2.1. Configuring role-based access The following example procedure shows how to map roles to particular MBeans and their attributes and methods. Prerequisites You must first define users and roles. For more information, see Section 5.2.2.1, "Configuring basic user and password authentication" . Procedure Open the <broker_instance_dir> /etc/management.xml configuration file. Search for the role-access element and edit the configuration. For example: <role-access> <match domain="org.apache.activemq.artemis"> <access method="list*" roles="view,update,amq"/> <access method="get*" roles="view,update,amq"/> <access method="is*" roles="view,update,amq"/> <access method="set*" roles="update,amq"/> <access method="*" roles="amq"/> </match> </role-access> In this case, a match is applied to any MBean attribute that has the domain name org.apache.activemq.artemis . Access of the view , update , or amq role to a matching MBean attribute is controlled by which of the list* , get* , set* , is* , and * access methods you add the role to. The method="*" (wildcard) syntax is used as a catch-all way to specify every other method that is not listed in the configuration.
Each of the access methods in the configuration is converted to an MBean method call. An invoked MBean method is matched against the methods listed in the configuration. For example, if you invoke a method called listMessages on an MBean with the org.apache.activemq.artemis domain, then the broker matches access back to the roles defined in the configuration for the list method. You can also configure access by using the full MBean method name. For example: <access method="listMessages" roles="view,update,amq"/> Start or restart the broker. On Linux: <broker_instance_dir> /bin/artemis run On Windows: <broker_instance_dir> \bin\artemis-service.exe start You can also match specific MBeans within a domain by adding a key attribute that matches an MBean property. 5.3.2.2.2. Role-based access examples This section shows the following examples of applying role-based access control: Mapping roles to all queues in a domain . Mapping roles to a specific queue in a domain . Mapping roles to all queue names that include a specified prefix . Mapping different roles to different sets of queues . The following example shows how to use the key attribute to map roles to all queues in a specified domain. <match domain="org.apache.activemq.artemis" key="subcomponent=queues"> <access method="list*" roles="view,update,amq"/> <access method="get*" roles="view,update,amq"/> <access method="is*" roles="view,update,amq"/> <access method="set*" roles="update,amq"/> <access method="*" roles="amq"/> </match> The following example shows how to use the key attribute to map roles to a specific, named queue. In this example, the named queue is exampleQueue . <match domain="org.apache.activemq.artemis" key="queue=exampleQueue"> <access method="list*" roles="view,update,amq"/> <access method="get*" roles="view,update,amq"/> <access method="is*" roles="view,update,amq"/> <access method="set*" roles="update,amq"/> <access method="*" roles="amq"/> </match> The following example shows how to map roles to every queue whose name includes a specified prefix. In this example, an asterisk ( * ) wildcard operator is used to match all queue names that start with the prefix example . <match domain="org.apache.activemq.artemis" key="queue=example*"> <access method="list*" roles="view,update,amq"/> <access method="get*" roles="view,update,amq"/> <access method="is*" roles="view,update,amq"/> <access method="set*" roles="update,amq"/> <access method="*" roles="amq"/> </match> You might want to map roles differently for different sets of the same attribute (for example, different sets of queues). In this case, you can include multiple match elements in your configuration file. However, it is then possible to have multiple matches in the same domain. For example, consider two <match> elements configured as follows: <match domain="org.apache.activemq.artemis" key="queue=example*"> and <match domain="org.apache.activemq.artemis" key="queue=example.sub*"> Based on this configuration, a queue named example.sub.queue in the org.apache.activemq.artemis domain matches both wildcard key expressions. Therefore, the broker needs a prioritization scheme to decide which set of roles to map to the queue; the roles specified in the first match element, or those specified in the second match element. 
When there are multiple matches in the same domain, the broker uses the following prioritization scheme when mapping roles: Exact matches are prioritized over wildcard matches Longer wildcard matches are prioritized over shorter wildcard matches In this example, because the longer wildcard expression matches the queue name of example.sub.queue most closely, the broker applies the role-mapping configured in the second <match> element. Note The default-access element is a catch-all element for every method call that is not handled using the role-access or whitelist configurations. The default-access and role-access elements have the same match element semantics. 5.3.2.2.3. Configuring the whitelist element A whitelist is a set of pre-approved domains or MBeans that do not require user authentication. You can provide a list of domains, or list of MBeans, or both, that must bypass the authentication. For example, you might use the whitelist to specify any MBeans that are needed by the AMQ Broker management console to run. The following example procedure shows how to configure the whitelist element. Procedure Open the <broker_instance_dir> /etc/management.xml configuration file. Search for the whitelist element and edit the configuration: <whitelist> <entry domain="hawtio"/> </whitelist> In this example, any MBean with the domain hawtio is allowed access without authentication. You can also use wildcard entries of the form <entry domain="hawtio" key="type=*"/> for the MBean properties to match. Start or restart the broker. On Linux: <broker_instance_dir> /bin/artemis run On Windows: <broker_instance_dir> \bin\artemis-service.exe start 5.3.2.3. Setting resource limits Sometimes it is helpful to set particular limits on what certain users can do beyond the normal security settings related to authorization and authentication. 5.3.2.3.1. Configuring connection and queue limits The following example procedure shows how to limit the number of connections and queues that a user can create. Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a resource-limit-settings element. Specify values for max-connections and max-queues . For example: <resource-limit-settings> <resource-limit-setting match="myUser"> <max-connections>5</max-connections> <max-queues>3</max-queues> </resource-limit-setting> </resource-limit-settings> max-connections Defines how many sessions the matched user can create on the broker. The default is -1 , which means that there is no limit. If you want to limit the number of sessions, take into account that each connection to the broker from an AMQ Core Protocol JMS client creates two sessions. max-queues Defines how many queues the matched user can create. The default is -1 , which means that there is no limit. Note Unlike the match string that you can specify in the address-setting element of a broker configuration, the match string that you specify in resource-limit-settings cannot use wildcard syntax. Instead, the match string defines a specific user to which the resource limit settings are applied. 5.4. Using LDAP for authentication and authorization The LDAP login module enables authentication and authorization by checking the incoming credentials against user data stored in a central X.500 directory server. It is implemented by org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule . 5.4.1. Configuring LDAP to authenticate clients The following example procedure shows how to use LDAP to authenticate clients. 
Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the security-settings element, add a security-setting element to configure permissions. For example: <security-settings> <security-setting match="#"> <permission type="createDurableQueue" roles="user"/> <permission type="deleteDurableQueue" roles="user"/> <permission type="createNonDurableQueue" roles="user"/> <permission type="deleteNonDurableQueue" roles="user"/> <permission type="send" roles="user"/> <permission type="consume" roles="user"/> </security-setting> </security-settings> The preceding configuration assigns specific permissions for all queues to members of the user role. Open the <broker_instance_dir> /etc/login.config file. Configure the LDAP login module, based on the directory service you are using. If you are using the Microsoft Active Directory directory service, add a configuration that resembles this example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required debug=true initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connectionURL="LDAP://localhost:389" connectionUsername="CN=Administrator,CN=Users,OU=System,DC=example,DC=com" connectionPassword=redhat.123 connectionProtocol=s connectionTimeout="5000" authentication=simple userBase="dc=example,dc=com" userSearchMatching="(CN={0})" userSearchSubtree=true readTimeout="5000" roleBase="dc=example,dc=com" roleName=cn roleSearchMatching="(member={0})" roleSearchSubtree=true ; }; Note If you are using Microsoft Active Directory, and a value that you need to specify for an attribute of connectionUsername contains a space (for example, OU=System Accounts ), then you must enclose the value in a pair of double quotes ( "" ) and use a backslash ( \ ) to escape each double quote in the pair. For example, connectionUsername="CN=Administrator,CN=Users,OU=\"System Accounts\",DC=example,DC=com" . If you are using the ApacheDS directory service, add a configuration that resembles this example: activemq { org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required debug=true initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connectionURL="ldap://localhost:10389" connectionUsername="uid=admin,ou=system" connectionPassword=secret connectionProtocol=s connectionTimeout=5000 authentication=simple userBase="dc=example,dc=com" userSearchMatching="(uid={0})" userSearchSubtree=true userRoleName= readTimeout=5000 roleBase="dc=example,dc=com" roleName=cn roleSearchMatching="(member={0})" roleSearchSubtree=true ; }; debug Turn debugging on ( true ) or off ( false ). The default value is false . initialContextFactory Must always be set to com.sun.jndi.ldap.LdapCtxFactory connectionURL Location of the directory server using an LDAP URL, __<ldap://Host:Port>. One can optionally qualify this URL, by adding a forward slash, / , followed by the DN of a particular node in the directory tree. The default port of Apache DS is 10389 while for Microsoft AD the default is 389 . connectionUsername Distinguished Name (DN) of the user that opens the connection to the directory server. For example, uid=admin,ou=system . Directory servers generally require clients to present username/password credentials in order to open a connection. connectionPassword Password that matches the DN from connectionUsername . In the directory server, in the Directory Information Tree (DIT), the password is normally stored as a userPassword attribute in the corresponding directory entry. 
connectionProtocol Any value is supported but is effectively unused. This option must be set explicitly because it has no default value. connectionTimeout Specify the maximum time, in milliseconds, that the broker can take to connect to the directory server. If the broker cannot connect to the directory sever within this time, it aborts the connection attempt. If you specify a value of zero or less for this property, the timeout value of the underlying TCP protocol is used instead. If you do not specify a value, the broker waits indefinitely to establish a connection, or the underlying network times out. When connection pooling has been requested for a connection, then this property specifies the maximum time that the broker waits for a connection when the maximum pool size has already been reached and all connections in the pool are in use. If you specify a value of zero or less, the broker waits indefinitely for a connection to become available. Otherwise, the broker aborts the connection attempt when the maximum wait time has been reached. authentication Specifies the authentication method used when binding to the LDAP server. This parameter can be set to either simple (which requires a username and password) or none (which allows anonymous access). userBase Select a particular subtree of the DIT to search for user entries. The subtree is specified by a DN, which specifies the base node of the subtree. For example, by setting this option to ou=User,ou=ActiveMQ,ou=system , the search for user entries is restricted to the subtree beneath the ou=User,ou=ActiveMQ,ou=system node. userSearchMatching Specify an LDAP search filter, which is applied to the subtree selected by userBase . See the Section 5.4.1.1, "Search matching parameters" section below for more information. userSearchSubtree Specify the search depth for user entries, relative to the node specified by userBase . This option is a Boolean. Specifying a value of false means that the search tries to match one of the child entries of the userBase node (maps to javax.naming.directory.SearchControls.ONELEVEL_SCOPE ). Specifying a value of true means that the search tries to match any entry belonging to the subtree of the userBase node (maps to javax.naming.directory.SearchControls.SUBTREE_SCOPE ). userRoleName Name of the attribute of the user entry that contains a list of role names for the user. Role names are interpreted as group names by the broker's authorization plug-in. If this option is omitted, no role names are extracted from the user entry. readTimeout Specify the maximum time, in milliseconds, that the broker can wait to receive a response from the directory server to an LDAP request. If the broker does not receive a response from the directory server in this time, the broker aborts the request. If you specify a value of zero or less, or you do not specify a value, the broker waits indefinitely for a response from the directory server to an LDAP request. roleBase If role data is stored directly in the directory server, one can use a combination of role options ( roleBase , roleSearchMatching , roleSearchSubtree , and roleName ) as an alternative to (or in addition to) specifying the userRoleName option. This option selects a particular subtree of the DIT to search for role/group entries. The subtree is specified by a DN, which specifies the base node of the subtree. 
For example, by setting this option to ou=Group,ou=ActiveMQ,ou=system , the search for role/group entries is restricted to the subtree beneath the ou=Group,ou=ActiveMQ,ou=system node. roleName Attribute type of the role entry that contains the name of the role/group (such as C, O, OU, etc.). If this option is omitted the role search feature is effectively disabled. roleSearchMatching Specify an LDAP search filter, which is applied to the subtree selected by roleBase . See the Section 5.4.1.1, "Search matching parameters" section below for more information. roleSearchSubtree Specify the search depth for role entries, relative to the node specified by roleBase . If set to false (which is the default) the search tries to match one of the child entries of the roleBase node (maps to javax.naming.directory.SearchControls.ONELEVEL_SCOPE ). If true it tries to match any entry belonging to the subtree of the roleBase node (maps to javax.naming.directory.SearchControls.SUBTREE_SCOPE ). Note Apache DS uses the OID portion of DN path. Microsoft Active Directory uses the CN portion. For example, you might use a DN path such as oid=testuser,dc=example,dc=com in Apache DS, while you might use a DN path such as cn=testuser,dc=example,dc=com in Microsoft Active Directory. Start or restart the broker (service or process). 5.4.1.1. Search matching parameters userSearchMatching Before passing to the LDAP search operation, the string value provided in this configuration parameter is subjected to string substitution, as implemented by the java.text.MessageFormat class. This means that the special string, {0} , is substituted by the username, as extracted from the incoming client credentials. After substitution, the string is interpreted as an LDAP search filter (the syntax is defined by the IETF standard RFC 2254). For example, if this option is set to (uid={0}) and the received username is jdoe , the search filter becomes (uid=jdoe) after string substitution. If the resulting search filter is applied to the subtree selected by the user base, ou=User,ou=ActiveMQ,ou=system , it would match the entry, uid=jdoe,ou=User,ou=ActiveMQ,ou=system . roleSearchMatching This works in a similar manner to the userSearchMatching option, except that it supports two substitution strings. The substitution string {0} substitutes the full DN of the matched user entry (that is, the result of the user search). For example, for the user, jdoe , the substituted string could be uid=jdoe,ou=User,ou=ActiveMQ,ou=system . The substitution string {1} substitutes the received user name. For example, jdoe . If this option is set to (member=uid={1}) and the received user name is jdoe , the search filter becomes (member=uid=jdoe) after string substitution (assuming ApacheDS search filter syntax). If the resulting search filter is applied to the subtree selected by the role base, ou=Group,ou=ActiveMQ,ou=system , it matches all role entries that have a member attribute equal to uid=jdoe (the value of a member attribute is a DN). This option must always be set, even if role searching is disabled, because it has no default value. If OpenLDAP is used, the syntax of the search filter is (member:=uid=jdoe) . Additional resources For a short introduction to the search filter syntax, see Oracle JNDI tutorial . 5.4.2. 
Configuring LDAP authorization The LegacyLDAPSecuritySettingPlugin security settings plugin reads the security information previously handled in AMQ 6 by LDAPAuthorizationMap and cachedLDAPAuthorizationMap and converts this information to corresponding AMQ 7 security settings, where possible. The security implementations for brokers in AMQ 6 and AMQ 7 do not match exactly. Therefore, the plugin performs some translation between the two versions to achieve near-equivalent functionality. The following example shows how to configure the plugin. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the security-settings element, add the security-setting-plugin element. For example: <security-settings> <security-setting-plugin class-name="org.apache.activemq.artemis.core.server.impl.LegacyLDAPSecuritySettingPlugin"> <setting name="initialContextFactory" value="com.sun.jndi.ldap.LdapCtxFactory"/> <setting name="connectionURL" value="ldap://localhost:1024"/> <setting name="connectionUsername" value="uid=admin,ou=system"/> <setting name="connectionPassword" value="secret"/> <setting name="connectionProtocol" value="s"/> <setting name="authentication" value="simple"/> </security-setting-plugin> </security-settings> class-name The implementation is org.apache.activemq.artemis.core.server.impl.LegacyLDAPSecuritySettingPlugin . initialContextFactory The initial context factory used to connect to LDAP. It must always be set to com.sun.jndi.ldap.LdapCtxFactory (that is, the default value). connectionURL Specifies the location of the directory server using an LDAP URL, <ldap://Host:Port> . You can optionally qualify this URL by adding a forward slash, / , followed by the distinguished name (DN) of a particular node in the directory tree. For example, ldap://ldapserver:10389/ou=system . The default value is ldap://localhost:1024 . connectionUsername The DN of the user that opens the connection to the directory server. For example, uid=admin,ou=system . Directory servers generally require clients to present username/password credentials in order to open a connection. connectionPassword The password that matches the DN from connectionUsername . In the directory server, in the Directory Information Tree (DIT), the password is normally stored as a userPassword attribute in the corresponding directory entry. connectionProtocol Currently unused. In the future, this option might allow you to select the Secure Socket Layer (SSL) for the connection to the directory server. This option must be set explicitly because it has no default value. authentication Specifies the authentication method used when binding to the LDAP server. Valid values for this parameter are simple (username and password) or none (anonymous). The default value is simple . Note Simple Authentication and Security Layer (SASL) authentication is not supported. Other settings not shown in the preceding configuration example are: destinationBase Specifies the DN of the node whose children provide the permissions for all destinations. In this case, the DN is a literal value (that is, no string substitution is performed on the property value). For example, a typical value of this property is ou=destinations,o=ActiveMQ,ou=system The default value is ou=destinations,o=ActiveMQ,ou=system . filter Specifies an LDAP search filter, which is used when looking up the permissions for any kind of destination.
The search filter attempts to match one of the children or descendants of the queue or topic node. The default value is (cn=*) . roleAttribute Specifies an attribute of the node matched by filter whose value is the DN of a role. The default value is uniqueMember . adminPermissionValue Specifies a value that matches the admin permission. The default value is admin . readPermissionValue Specifies a value that matches the read permission. The default value is read . writePermissionValue Specifies a value that matches the write permission. The default value is write . enableListener Specifies whether to enable a listener that automatically receives updates made in the LDAP server and update the broker's authorization configuration in real time. The default value is true . mapAdminToManage Specifies whether to map the legacy (that is, AMQ 6) admin permission to the AMQ 7 manage permission. See details of the mapping semantics in the table below. The default value is false . The name of the queue or topic defined in LDAP serves as the "match" for the security setting, the permission value is mapped from the AMQ 6 type to the AMQ 7 type, and the role is mapped as-is. Because the name of the queue or topic defined in LDAP serves as the match for the security setting, the security setting may not be applied as expected to JMS destinations. This is because AMQ 7 always prefixes JMS destinations with "jms.queue." or "jms.topic.", as necessary. AMQ 6 has three permission types - read , write , and admin . These permission types are described on the ActiveMQ website; Security . AMQ 7 has the following permission types: createAddress deleteAddress createDurableQueue deleteDurableQueue createNonDurableQueue deleteNonDurableQueue send consume manage browse This table shows how the security settings plugin maps AMQ 6 permission types to AMQ 7 permission types: AMQ 6 permission type AMQ 7 permission type read consume, browse write send admin createAddress, deleteAddress, createDurableQueue, deleteDurableQueue, createNonDurableQueue, deleteNonDurableQueue, manage (if mapAdminToManage is set to true ) As described below, there are some cases in which the plugin performs some translation between the AMQ 6 and AMQ 7 permission types to achieve equivalence: The mapping does not include the AMQ 7 manage permission type by default because there is no analogous permission type in AMQ 6. However, if mapAdminToManage is set to true , the plugin maps the AMQ 6 admin permission to the AMQ 7 manage permission. The admin permission type in AMQ 6 determines whether the broker automatically creates a destination if the destination does not exist and the user sends a message to it. AMQ 7 automatically allows automatic creation of a destination if the user has permission to send messages to the destination. Therefore, the plugin maps the legacy admin permission to the AMQ 7 permissions shown above, by default. The plugin also maps the AMQ 6 admin permission to the AMQ 7 manage permission if mapAdminToManage is set to true . allowQueueAdminOnRead Whether or not to map the legacy read permission to the createDurableQueue, createNonDurableQueue, and deleteDurableQueue permissions so that JMS clients can create durable and non-durable subscriptions without needing the admin permission. This was allowed in AMQ 6. The default value is false. 
This table shows how the security settings plugin maps AMQ 6 permission types to AMQ 7 permission types when allowQueueAdminOnRead is true : AMQ 6 permission type AMQ 7 permission type read consume, browse, createDurableQueue, createNonDurableQueue, deleteDurableQueue write send admin createAddress, deleteAddress, deleteNonDurableQueue, manage (if mapAdminToManage is set to true ) 5.4.3. Encrypting the password in the login.config file Because organizations frequently securely store data with LDAP, the login.config file can contain the configuration required for the broker to communicate with the organization's LDAP server. This configuration file usually includes a password to log in to the LDAP server, so this password needs to be encrypted. Prerequisites Ensure that you have modified the login.config file to add the required properties, as described in Section 5.4.2, "Configuring LDAP authorization" . Procedure The following procedure shows how to mask the value of the connectionPassword parameter found in the <broker_instance_dir> /etc/login.config file. From a command prompt, use the mask utility to encrypt the password: Open the <broker_instance_dir> /etc/login.config file. Locate the connectionPassword parameter: Replace the plain-text password with the encrypted value: Wrap the encrypted value with the identifier "ENC()" : The login.config file now contains a masked password. Because the password is wrapped with the "ENC()" identifier, AMQ Broker decrypts it before it is used. Additional resources For more information about the configuration files included with AMQ Broker, see AMQ Broker configuration files and locations . 5.4.4. Mapping external roles You can map roles from external authentication providers such as LDAP to roles used internally by the broker. To map external roles, create role-mapping entries in a security-settings element in the broker.xml configuration file. For example: <security-settings> ... <role-mapping from="cn=admins,ou=Group,ou=ActiveMQ,ou=system" to="my-admin-role"/> <role-mapping from="cn=users,ou=Group,ou=ActiveMQ,ou=system" to="my-user-role"/> </security-settings> Note Role mapping is additive. That means the user will keep the original role(s) as well as the newly assigned role(s). Role mapping only affects the roles authorizing queue access and does not provide a method to enable web console access. 5.5. Using Kerberos for authentication and authorization When sending and receiving messages with the AMQP protocol, clients can send Kerberos security credentials that AMQ Broker authenticates by using the GSSAPI mechanism from the Simple Authentication and Security Layer (SASL) framework. Kerberos credentials can also be used for authorization by mapping an authenticated user to an assigned role configured in an LDAP directory or text-based properties file. You can use SASL in tandem with Transport Layer Security (TLS) to secure your messaging applications. SASL provides user authentication, and TLS provides data integrity. Important You must deploy and configure a Kerberos infrastructure before AMQ Broker can authenticate and authorize Kerberos credentials. See your operating system documentation for more information about deploying Kerberos. For RHEL 7, see Using Kerberos . For Windows, see Kerberos Authentication Overview . Users of an Oracle or IBM JDK should install the Java Cryptography Extension (JCE). See the documentation from the Oracle version of the JCE or the IBM version of the JCE for more information. 
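The exact commands for creating the service principal and keytab that the broker uses depend on your Kerberos distribution. As a rough sketch for an MIT Kerberos environment only (the keytab path is an illustrative assumption; the principal name matches the example used later in this section), a Kerberos administrator might run:

# On the KDC host, create the broker's service principal and export it to a keytab
kadmin.local -q "addprinc -randkey amqp/[email protected]"
kadmin.local -q "ktadd -k /etc/krb5.keytab amqp/[email protected]"

The resulting principal and keytab are the values that the Krb5LoginModule configuration described later references through its principal and useKeyTab options.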
The following procedures show how to configure Kerberos for authentication and authorization. 5.5.1. Configuring network connections to use Kerberos AMQ Broker integrates with Kerberos security credentials by using the GSSAPI mechanism from the Simple Authentication and Security Layer (SASL) framework. To use Kerberos in AMQ Broker, each acceptor authenticating or authorizing clients that use a Kerberos credential must be configured to use the GSSAPI mechanism. The following procedure shows how to configure an acceptor to use Kerberos. Prerequisites You must deploy and configure a Kerberos infrastructure before AMQ Broker can authenticate and authorize Kerberos credentials. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the name-value pair saslMechanisms=GSSAPI to the query string of the URL for the acceptor . The preceding configuration means that the acceptor uses the GSSAPI mechanism when authenticating Kerberos credentials. (Optional) The PLAIN and ANONYMOUS SASL mechanisms are also supported. To specify multiple mechanisms, use a comma-separated list. For example: The result is an acceptor that uses both the GSSAPI and PLAIN SASL mechanisms. Start the broker. On Linux: On Windows: Additional resources For more information about acceptors, see Section 2.1, "About acceptors" . 5.5.2. Authenticating clients with Kerberos credentials AMQ Broker supports Kerberos authentication of AMQP connections that use the GSSAPI mechanism from the Simple Authentication and Security Layer (SASL) framework. A broker acquires its Kerberos acceptor credentials by using the Java Authentication and Authorization Service (JAAS). The JAAS library included with your Java installation is packaged with a login module, Krb5LoginModule , that authenticates Kerberos credentials. See the documentation from your Java vendor for more information about their Krb5LoginModule . For example, Oracle provides information about their Krb5LoginModule login module as part of their Java 8 documentation . Prerequisites You must enable the GSSAPI mechanism of an acceptor before it can authenticate AMQP connections using Kerberos security credentials. For more information, see Section 5.5.1, "Configuring network connections to use Kerberos" . Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/login.config configuration file. Add a configuration scope named amqp-sasl-gssapi . The following example shows configuration for the Krb5LoginModule found in Oracle and OpenJDK versions of the JDK. amqp-sasl-gssapi By default, the GSSAPI mechanism implementation on the broker uses a JAAS configuration scope named amqp-sasl-gssapi to obtain its Kerberos acceptor credentials. Krb5LoginModule This version of the Krb5LoginModule is provided by the Oracle and OpenJDK versions of the JDK. Verify the fully qualified class name of the Krb5LoginModule and its available options by referring to the documentation from your Java vendor. useKeyTab The Krb5LoginModule is configured to use a Kerberos keytab when authenticating a principal. Keytabs are generated using tooling from your Kerberos environment. See the documentation from your vendor for details about generating Kerberos keytabs. principal The Principal is set to amqp/[email protected] . This value must correspond to the service principal created in your Kerberos environment. See the documentation from your vendor for details about creating service principals. Start the broker.
On Linux: On Windows: 5.5.2.1. Using an alternative configuration scope You can specify an alternative configuration scope by adding the parameter saslLoginConfigScope to the URL of an AMQP acceptor. In the following configuration example, the parameter saslLoginConfigScope is given the value alternative-sasl-gssapi . The result is an acceptor that uses the alternative scope named alternative-sasl-gssapi , declared in the <broker_instance_dir> /etc/login.config configuration file. 5.5.3. Authorizing clients with Kerberos credentials AMQ Broker includes an implementation of the JAAS Krb5LoginModule login module for use by other security modules when mapping roles. The module adds a Kerberos-authenticated Peer Principal to the Subject's principal set as an AMQ Broker UserPrincipal. The credentials can then be passed to a PropertiesLoginModule or LDAPLoginModule module, which maps the Kerberos-authenticated Peer Principal to an AMQ Broker role. Note The Kerberos Peer Principal does not exist as a broker user, only as a role member. Prerequisites You must enable the GSSAPI mechanism of an acceptor before it can authorize AMQP connections using Kerberos security credentials. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/login.config configuration file. Add configuration for the AMQ Broker Krb5LoginModule and the LDAPLoginModule . Verify the configuration options by referring to the documentation from your LDAP provider. An example configuration is shown below. Note The version of the Krb5LoginModule shown in the preceding example is distributed with AMQ Broker and transforms the Kerberos identity into a broker identity that can be used by other AMQ modules for role mapping. Start the broker. On Linux: On Windows: Additional resources See Section 5.5.1, "Configuring network connections to use Kerberos" for more information about enabling the GSSAPI mechanism in AMQ Broker. See Section 5.2.2.1, "Configuring basic user and password authentication" for more information about PropertiesLoginModule . See Section 5.4.1, "Configuring LDAP to authenticate clients" for more information about LDAPLoginModule . 5.6. Specifying a security manager The broker uses a component called the security manager to handle authentication and authorization. AMQ Broker includes two security managers: The ActiveMQJAASSecurityManager security manager. This security manager provides integration with JAAS and Red Hat JBoss Enterprise Application Platform (JBoss EAP) security. This is the default security manager used by AMQ Broker. The ActiveMQBasicSecurityManager security manager. This basic security manager doesn't support JAAS. Instead, it supports authentication and authorization through user name and password credentials. This security manager supports adding, removing, and updating users using the management API. All user and role data is stored in the broker bindings journal. This means that any changes made to a live broker are also available to its backup broker. As an alternative to the included security managers, a system administrator might want more control over the implementation of broker security. In this case, it is also possible to specify a custom security manager in the broker configuration. A custom security manager is a user-defined class that implements the org.apache.activemq.artemis.spi.core.security.ActiveMQSecurityManager5 interface. 
The examples in the following sub-sections show how to configure the broker to use: The basic security manager instead of the default JAAS security manager A custom security manager 5.6.1. Using the basic security manager In addition to the default ActiveMQJAASSecurityManager security manager, AMQ Broker also includes the ActiveMQBasicSecurityManager security manager. When you use the basic security manager, all user and role data is stored in the bindings journal (or the bindings table , if you are using JDBC persistence). Therefore, if you have configured a live-backup broker group, any user management that you peform on the live broker is automatically reflected on the backup broker upon failover. This avoids the need to separately administer an LDAP server, which is the alternative way to achieve this behavior. Before you configure and use the basic security manager, be aware of the following: The basic security manager is not pluggable like the default JAAS security manager. The basic security manager does not support JAAS. Instead, it supports only authentication and authorization through user name and password credentials. AMQ Management Console requires JAAS. Therefore, if you use the basic security manager and want to use the console, you also need to configure the login.config configuration file for user and password authentication. For more information about configuring user and password authentication, see Section 5.2.2.1, "Configuring basic user and password authentication" . In AMQ Broker, user management is provided by the broker management API. This management includes the ability to add, list, update, and remove users and roles. You can perform these functions using JMX, management messages, HTTP (using Jolokia or AMQ Management Console), and the AMQ Broker command-line interface. Because the broker directly store this data, the broker must be running in order to manage users. There is no way to manually modify the bindings data. Any management access through HTTP (for example, using Jolokia or AMQ Management Console) is handled by the console JAAS login module. MBean access through JConsole or other remote JMX tools is handled by the basic security manager. Management messages are handled by the basic security manager. 5.6.1.1. Configuring the basic security manager The following procedure shows how to configure the broker to use the basic security manager. Procedure Open the <broker-instance-dir> /etc/boostrap.xml configuration file. In the security-manager element, for the class-name attribute, specify the full ActiveMQBasicSecurityManager class name. <broker xmlns="http://activemq.org/schema"> ... <security-manager class-name="org.apache.activemq.artemis.spi.core.security.ActiveMQBasicSecurityManager"> </security-manager> ... </broker> Because you cannot manually modify the bindings data that holds user and role data, and because the broker must be running to manage users, it is advisable to secure the broker upon first boot. To achieve this, define a bootstrap user whose credentials can then be used to add other users. In the security-manager element, add the bootstrapUser , bootstrapPassword , and bootstrapRole properties and specify values. For example: <broker xmlns="http://activemq.org/schema"> ... 
<security-manager class-name="org.apache.activemq.artemis.spi.core.security.ActiveMQBasicSecurityManager"> <property key="bootstrapUser" value="myUser"/> <property key="bootstrapPassword" value="myPass"/> <property key="bootstrapRole" value="myRole"/> </security-manager> ... </broker> bootstrapUser Name of the bootstrap user. bootstrapPassword Password of the bootstrap user. You can also specify an encrypted password. bootstrapRole Role of the bootstrap user. Note If you define the preceding properties for the bootstrap user in your configuration, those credentials are set each time that you start the broker, regardless of any changes you make while the broker is running. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the broker.xml configuration file, locate the address-setting element that is defined by default for the activemq.management# address match. These default address settings are shown below. <address-setting match="activemq.management#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!--...--> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> Within the address settings for the activemq.management# address match, for the bootstrap role name that you specified earlier in this procedure, add the following required permissions: createNonDurableQueue createAddress consume manage send For example: <address-setting match="activemq.management#"> ... <permission type="createNonDurableQueue" roles="myRole"/> <permission type="createAddress" roles="myRole"/> <permission type="consume" roles="myRole"/> <permission type="manage" roles="myRole"/> <permission type="send" roles="myRole"/> </address-setting> Additional resources For more information about the ActiveMQBasicSecurityManager class, see Class ActiveMQBasicSecurityManager in the ActiveMQ Artemis Core API documentation. To learn how to encrypt passwords in configuration files, see Section 5.9, "Encrypting passwords in configuration files" . 5.6.2. Specifying a custom security manager The following procedure shows how to specify a custom security manager in your broker configuration. Procedure Open the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the security-manager element, for the class-name attribute, specify the class that is a user-defined implementation of the org.apache.activemq.artemis.spi.core.security.ActiveMQSecurityManager5 interface. For example: <broker xmlns="http://activemq.org/schema"> ... <security-manager class-name="com.myclass.MySecurityManager"> <property key="myKey1" value="myValue1"/> <property key="myKey2" value="myValue2"/> </security-manager> ... </broker> Additional resources For more information about the ActiveMQSecurityManager5 interface, see Interface ActiveMQSecurityManager5 in the ActiveMQ Artemis Core API documentation. 5.6.3. Running the custom security manager example program AMQ Broker includes an example program that demonstrates how to implement a custom security manager.
In the example, the custom security manager logs details for authentication and authorization and then passes the details to an instance of ActiveMQJAASSecurityManager (that is, the default security manager). The following procedure shows how to run the custom security manager example program. Prerequisites Your machine must be set up to run AMQ Broker example programs. For more information, see Running the AMQ Broker examples . Procedure Navigate to the directory that contains the custom security manager example. Run the example. Note If you would prefer to manually create and start a broker instance when running the example program, replace the command in the preceding step with mvn -PnoServer verify . Additional resources For more information about the ActiveMQJAASSecurityManager class, see Class ActiveMQJAASSecurityManager in the ActiveMQ Artemis Core API documentation. 5.7. Disabling security Security is enabled by default. The following procedure shows how to disable broker security. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the core element, set the value of security-enabled to false . <security-enabled>false</security-enabled> If necessary, specify a new value, in milliseconds, for security-invalidation-interval . The value of this property specifies the interval at which the broker invalidates secure logins. The default value is 10000 . 5.8. Tracking messages from validated users To enable tracking and logging the origins of messages (for example, for security-auditing purposes), you can use the _AMQ_VALIDATED_USER message key. In the broker.xml configuration file, if the populate-validated-user option is set to true , then the broker adds the name of the validated user to the message using the _AMQ_VALIDATED_USER key. For JMS and STOMP clients, this message key maps to the JMSXUserID key. Note The broker cannot add the validated user name to a message produced by an AMQP JMS client. Modifying the properties of an AMQP message after it has been sent by a client is a violation of the AMQP protocol. For a user authenticated based on their SSL certificate, the validated user name populated by the broker is the name to which the certificate's Distinguished Name (DN) maps. In the broker.xml configuration file, if security-enabled is false and populate-validated-user is true , then the broker populates whatever user name, if any, that the client provides. The populate-validated-user option is false by default. You can configure the broker to reject a message that doesn't have a user name (that is, the JMSXUserID key) already populated by the client when it sends the message. You might find this option useful for AMQP clients, because the broker cannot populate the validated user name itself for messages sent by these clients. To configure the broker to reject messages without JMSXUserID set by the client, add the following configuration to the broker.xml configuration file: <reject-empty-validated-user>true</reject-empty-validated-user> By default, reject-empty-validated-user is set to false . 5.9. Encrypting passwords in configuration files By default, AMQ Broker stores all passwords in configuration files as plain text. Be sure to secure all configuration files with the correct permissions to prevent unauthorized access. You can also encrypt, or mask , the plain text passwords to prevent unwanted viewers from reading them. 5.9.1. About encrypted passwords An encrypted, or masked , password is the encrypted version of a plain text password.
The encrypted version is generated by the mask command-line utility provided by AMQ Broker. For more information about the mask utility, see the command-line help documentation: To mask a password, replace its plain-text value with the encrypted one. The masked password must be wrapped by the identifier ENC() so that it is decrypted when the actual value is needed. In the following example, the configuration file <broker_instance_dir> /etc/bootstrap.xml contains masked passwords for the keyStorePassword and trustStorePassword parameters. <web bind="https://localhost:8443" path="web" keyStorePassword="ENC(-342e71445830a32f95220e791dd51e82)" trustStorePassword="ENC(32f94e9a68c45d89d962ee7dc68cb9d1)"> <app url="activemq-branding" war="activemq-branding.war"/> </web> You can use masked passwords with the following configuration files. broker.xml bootstrap.xml management.xml artemis-users.properties login.config (for use with the LDAPLoginModule ) Configuration files are found at <broker_instance_dir> /etc . Note artemis-users.properties supports only masked passwords that have been hashed. When a user is created upon broker creation, artemis-users.properties contains hashed passwords by default. The default PropertiesLoginModule will not decode the passwords in the artemis-users.properties file but will instead hash the input and compare the two hashed values for password verification. Changing the hashed password to a masked password does not allow access to the AMQ Broker management console. broker.xml , bootstrap.xml , management.xml , and login.config support passwords that are masked but not hashed. 5.9.2. Encrypting a password in a configuration file The following example shows how to mask the value of cluster-password in the broker.xml configuration file. Procedure From a command prompt, use the mask utility to encrypt a password: Open the <broker_instance_dir> /etc/broker.xml configuration file containing the plain-text password that you want to mask: <cluster-password> <password> </cluster-password> Replace the plain-text password with the encrypted value: <cluster-password> 3a34fd21b82bf2a822fa49a8d8fa115d </cluster-password> Wrap the encrypted value with the identifier ENC() : <cluster-password> ENC(3a34fd21b82bf2a822fa49a8d8fa115d) </cluster-password> The configuration file now contains an encrypted password. Because the password is wrapped with the ENC() identifier, AMQ Broker decrypts it before it is used. Additional resources For more information about the configuration files included with AMQ Broker, see Section 1.1, "AMQ Broker configuration files and locations" . 5.9.3. Setting a codec key to encrypt and decrypt passwords A codec is required to encrypt and decrypt passwords. If a custom codec is not configured, the mask utility uses a default codec to encrypt passwords and AMQ Broker uses the same default codec to decrypt a password. The codec is configured with a default key, which it provides to the underlying encryption algorithm to encrypt and decrypt passwords. Using the default key exposes a risk that the key might be used by a malicious actor to decrypt your passwords. When you use the mask utility to encrypt passwords, you can specify your own key string to avoid using the default codec key. You must then set the same key string in the ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY environment variable, so the broker can decrypt the passwords. Setting the key in an environment variable is more secure because the key is not persisted in a configuration file.
In addition, you can set the key immediately before you start the broker and unset it immediately after the broker starts. Procedure Use the mask utility to encrypt each password in a configuration file. For the key parameter, specify a string of characters with which to encrypt the password. Use the same key string to encrypt each password. Warning Ensure that you keep a record of the key string that you specify when you run the mask utility to encrypt passwords. You must configure the same key value in an environment variable to allow the broker to decrypt passwords. For more information about encrypting passwords in configuration files, see Section 5.9.2, "Encrypting a password in a configuration file" . From a command prompt, set the ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY environment variable to the key string that you specified when you encrypted each password. Start the broker. Unset the ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY environment variable. Note If you unset the ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY environment variable after you start the broker, you must set it again to the same key string before you start the broker each subsequent time. 5.10. Configuring authentication and authorization caching By default, AMQ Broker stores information about successful authentication and authorization responses in separate caches. You can change the default number of entries allowed in each cache and the duration for which entries are cached. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. To change the default maximum number of entries ( 1000 ) allowed in each cache, set the authentication-cache-size and the authorization-cache-size parameters. For example: <configuration> ... <core> ... <authentication-cache-size>2000</authentication-cache-size> <authorization-cache-size>1500</authorization-cache-size> ... </core> ... </configuration> Note If a cache reaches its configured limit, the least recently used entry is removed from the cache. To change the default duration ( 10000 milliseconds) for which entries are cached, set the security-invalidation-interval parameter. For example: <configuration> ... <core> ... <security-invalidation-interval>20000</security-invalidation-interval> ... </core> ... </configuration> Note If you set the security-invalidation-interval parameter to 0 , authentication and authorization caching is disabled.
|
[
"<acceptor name=\"artemis\">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!</acceptor>",
"<acceptor name=\"artemis\">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!;needClientAuth=true</acceptor>",
"<acceptor name=\"artemis\">tcp://0.0.0.0:61616?sslEnabled=true;keyStorePath=../etc/broker.keystore;keyStorePassword=1234!;needClientAuth=true;trustStorePath=../etc/client.truststore;trustStorePassword=5678!</acceptor>",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient debug=false reload=true org.apache.activemq.jaas.properties.user=\"artemis-users.properties\" org.apache.activemq.jaas.properties.role=\"artemis-roles.properties\"; };",
"user1=secret user2=access user3=myPassword",
"admin=user1,user2 developer=user3",
"<jaas-security domain=\"activemq\"/>",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient debug=true org.apache.activemq.jaas.properties.user=\"artemis-users.properties\" org.apache.activemq.jaas.properties.role=\"artemis-roles.properties\"; org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient debug=true org.apache.activemq.jaas.guest.user=\"guest\" org.apache.activemq.jaas.guest.role=\"restricted\"; };",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient debug=true credentialsInvalidate=true org.apache.activemq.jaas.guest.user=\"guest\" org.apache.activemq.jaas.guest.role=\"guests\"; org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule requisite debug=true org.apache.activemq.jaas.properties.user=\"artemis-users.properties\" org.apache.activemq.jaas.properties.role=\"artemis-roles.properties\"; };",
"keytool -export -file <file_name> -alias broker-localhost -keystore broker.ks -storepass <password>",
"keytool -printcert -file <file_name>",
"Owner: CN=localhost, OU=broker, O=Unknown, L=Unknown, ST=Unknown, C=Unknown Issuer: CN=localhost, OU=broker, O=Unknown, L=Unknown, ST=Unknown, C=Unknown Serial number: 4537c82e Valid from: Thu Oct 19 19:47:10 BST 2006 until: Wed Jan 17 18:47:10 GMT 2007 Certificate fingerprints: MD5: 3F:6C:0C:89:A8:80:29:CC:F5:2D:DA:5C:D7:3F:AB:37 SHA1: F0:79:0D:04:38:5A:46:CE:86:E1:8A:20:1F:7B:AB:3A:46:E4:34:5C",
"Owner: `CN=localhost,\\ OU=broker,\\ O=Unknown,\\ L=Unknown,\\ ST=Unknown,\\ C=Unknown`",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.TextFileCertificateLoginModule debug=true org.apache.activemq.jaas.textfiledn.user=\"artemis-users.properties\" org.apache.activemq.jaas.textfiledn.role=\"artemis-roles.properties\"; };",
"system=CN=system,O=Progress,C=US user=CN=humble user,O=Progress,C=US guest=CN=anon,O=Progress,C=DE",
"admins=system users=system,user guests=guest",
"<jaas-security domain=\"activemq\"/>",
"amqps://localhost:5500",
"amqps://localhost:5500?sslEnabled=true",
"amqps://localhost:5500?sslEnabled=true&trustStorePath= <trust_store_path> &trustStorePassword= <trust_store_password> &keyStorePath= <key_store_path> &keyStorePassword= <key_store_password>",
"amqps://localhost:5500?sslEnabled=true&trustStorePath= <trust_store_path> &trustStorePassword= <trust_store_password> &keyStorePath= <key_store_path> &keyStorePassword= <key_store_password> &saslMechanisms=EXTERNAL",
"<security-settings> <security-setting match=\"my.destination\"> <permission type=\"send\" roles=\"producer\"/> </security-setting> </security-settings>",
"<security-settings> <security-setting match=\"my.destination\"> <permission type=\"consume\" roles=\"consumer\"/> </security-setting> </security-settings>",
"<security-settings> <security-setting match=\"#\"> <permission type=\"createDurableQueue\" roles=\"guest\"/> <permission type=\"deleteDurableQueue\" roles=\"guest\"/> <permission type=\"createNonDurableQueue\" roles=\"guest\"/> <permission type=\"deleteNonDurableQueue\" roles=\"guest\"/> <permission type=\"createAddress\" roles=\"guest\"/> <permission type=\"deleteAddress\" roles=\"guest\"/> <permission type=\"send\" roles=\"guest\"/> <permission type=\"browse\" roles=\"guest\"/> <permission type=\"consume\" roles=\"guest\"/> <permission type=\"manage\" roles=\"guest\"/> </security-setting> </security-settings>",
"<security-setting match=\"globalqueues.europe.#\"> <permission type=\"createDurableQueue\" roles=\"admin\"/> <permission type=\"deleteDurableQueue\" roles=\"admin\"/> <permission type=\"createNonDurableQueue\" roles=\"admin, guest, europe-users\"/> <permission type=\"deleteNonDurableQueue\" roles=\"admin, guest, europe-users\"/> <permission type=\"send\" roles=\"admin, europe-users\"/> <permission type=\"consume\" roles=\"admin, europe-users\"/> </security-setting>",
"<security-setting match=\"globalqueues.europe.orders.#\"> <permission type=\"send\" roles=\"europe-users\"/> <permission type=\"consume\" roles=\"europe-users\"/> </security-setting>",
"<address name=\"ExampleQueue\"> <anycast> <queue name=\"ExampleQueue\" user=\"admin\"/> </anycast> </address>",
"<role-access> <match domain=\"org.apache.activemq.artemis\"> <access method=\"list*\" roles=\"view,update,amq\"/> <access method=\"get*\" roles=\"view,update,amq\"/> <access method=\"is*\" roles=\"view,update,amq\"/> <access method=\"set*\" roles=\"update,amq\"/> <access method=\"*\" roles=\"amq\"/> </match> </role-access>",
"<access method=\"listMessages\" roles=\"view,update,amq\"/>",
"<match domain=\"org.apache.activemq.artemis\" key=\"subcomponent=queues\"> <access method=\"list*\" roles=\"view,update,amq\"/> <access method=\"get*\" roles=\"view,update,amq\"/> <access method=\"is*\" roles=\"view,update,amq\"/> <access method=\"set*\" roles=\"update,amq\"/> <access method=\"*\" roles=\"amq\"/> </match>",
"<match domain=\"org.apache.activemq.artemis\" key=\"queue=exampleQueue\"> <access method=\"list*\" roles=\"view,update,amq\"/> <access method=\"get*\" roles=\"view,update,amq\"/> <access method=\"is*\" roles=\"view,update,amq\"/> <access method=\"set*\" roles=\"update,amq\"/> <access method=\"*\" roles=\"amq\"/> </match>",
"<match domain=\"org.apache.activemq.artemis\" key=\"queue=example*\"> <access method=\"list*\" roles=\"view,update,amq\"/> <access method=\"get*\" roles=\"view,update,amq\"/> <access method=\"is*\" roles=\"view,update,amq\"/> <access method=\"set*\" roles=\"update,amq\"/> <access method=\"*\" roles=\"amq\"/> </match>",
"<match domain=\"org.apache.activemq.artemis\" key=\"queue=example*\">",
"<match domain=\"org.apache.activemq.artemis\" key=\"queue=example.sub*\">",
"<whitelist> <entry domain=\"hawtio\"/> </whitelist>",
"<resource-limit-settings> <resource-limit-setting match=\"myUser\"> <max-connections>5</max-connections> <max-queues>3</max-queues> </resource-limit-setting> </resource-limit-settings>",
"<security-settings> <security-setting match=\"#\"> <permission type=\"createDurableQueue\" roles=\"user\"/> <permission type=\"deleteDurableQueue\" roles=\"user\"/> <permission type=\"createNonDurableQueue\" roles=\"user\"/> <permission type=\"deleteNonDurableQueue\" roles=\"user\"/> <permission type=\"send\" roles=\"user\"/> <permission type=\"consume\" roles=\"user\"/> </security-setting> </security-settings>",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required debug=true initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connectionURL=\"LDAP://localhost:389\" connectionUsername=\"CN=Administrator,CN=Users,OU=System,DC=example,DC=com\" connectionPassword=redhat.123 connectionProtocol=s connectionTimeout=\"5000\" authentication=simple userBase=\"dc=example,dc=com\" userSearchMatching=\"(CN={0})\" userSearchSubtree=true readTimeout=\"5000\" roleBase=\"dc=example,dc=com\" roleName=cn roleSearchMatching=\"(member={0})\" roleSearchSubtree=true ; };",
"activemq { org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required debug=true initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connectionURL=\"ldap://localhost:10389\" connectionUsername=\"uid=admin,ou=system\" connectionPassword=secret connectionProtocol=s connectionTimeout=5000 authentication=simple userBase=\"dc=example,dc=com\" userSearchMatching=\"(uid={0})\" userSearchSubtree=true userRoleName= readTimeout=5000 roleBase=\"dc=example,dc=com\" roleName=cn roleSearchMatching=\"(member={0})\" roleSearchSubtree=true ; };",
"<security-settings> <security-setting-plugin class-name=\"org.apache.activemq.artemis.core.server.impl.LegacyLDAPSecuritySettingPlugin\"> <setting name=\"initialContextFactory\" value=\"com.sun.jndi.ldap.LdapCtxFactory\"/> <setting name=\"connectionURL\" value=\"ldap://localhost:1024\"/>`ou=destinations,o=ActiveMQ,ou=system` <setting name=\"connectionUsername\" value=\"uid=admin,ou=system\"/> <setting name=\"connectionPassword\" value=\"secret\"/> <setting name=\"connectionProtocol\" value=\"s\"/> <setting name=\"authentication\" value=\"simple\"/> </security-setting-plugin> </security-settings>",
"<broker_instance_dir> /bin/artemis mask <password>",
"result: 3a34fd21b82bf2a822fa49a8d8fa115d",
"connectionPassword = <password>",
"connectionPassword = 3a34fd21b82bf2a822fa49a8d8fa115d",
"connectionPassword = \"ENC(3a34fd21b82bf2a822fa49a8d8fa115d)\"",
"<security-settings> <role-mapping from=\"cn=admins,ou=Group,ou=ActiveMQ,ou=system\" to=\"my-admin-role\"/> <role-mapping from=\"cn=users,ou=Group,ou=ActiveMQ,ou=system\" to=\"my-user-role\"/> </security-settings>",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<acceptor name=\"amqp\"> tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI </acceptor>",
"<acceptor name=\"amqp\"> tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI,PLAIN </acceptor>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"amqp-sasl-gssapi { com.sun.security.auth.module.Krb5LoginModule required isInitiator=false storeKey=true useKeyTab=true principal=\"amqp/[email protected]\" debug=true; };",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<acceptor name=\"amqp\"> tcp://0.0.0.0:5672?protocols=AMQP;saslMechanisms=GSSAPI,PLAIN;saslLoginConfigScope=alternative-sasl-gssapi` </acceptor>",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"org.apache.activemq.artemis.spi.core.security.jaas.Krb5LoginModule required ; org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule optional initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connectionURL=\"ldap://localhost:1024\" authentication=GSSAPI saslLoginConfigScope=broker-sasl-gssapi connectionProtocol=s userBase=\"ou=users,dc=example,dc=com\" userSearchMatching=\"(krb5PrincipalName={0})\" userSearchSubtree=true authenticateUser=false roleBase=\"ou=system\" roleName=cn roleSearchMatching=\"(member={0})\" roleSearchSubtree=false ;",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<broker xmlns=\"http://activemq.org/schema\"> <security-manager class-name=\"org.apache.activemq.artemis.spi.core.security.ActiveMQBasicSecurityManager\"> </security-manager> </broker>",
"<broker xmlns=\"http://activemq.org/schema\"> <security-manager class-name=\"org.apache.activemq.artemis.spi.core.security.ActiveMQBasicSecurityManager\"> <property key=\"bootstrapUser\" value=\"myUser\"/> <property key=\"bootstrapPassword\" value=\"myPass\"/> <property key=\"bootstrapRole\" value=\"myRole\"/> </security-manager> </broker>",
"<address-setting match=\"activemq.management#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!--...--> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting>",
"<address-setting match=\"activemq.management#\"> <permission type=\"createNonDurableQueue\" roles=\"myRole\"/> <permission type=\"createAddress\" roles=\"myRole\"/> <permission type=\"consume\" roles=\"myRole\"/> <permission type=\"manage\" roles=\"myRole\"/> <permission type=\"send\" roles=\"myRole\"/> </address-setting>",
"<broker xmlns=\"http://activemq.org/schema\"> <security-manager class-name=\"com.myclass.MySecurityManager\"> <property key=\"myKey1\" value=\"myValue1\"/> <property key=\"myKey2\" value=\"myValue2\"/> </security-manager> </broker>",
"cd <install_dir> /examples/features/standard/security-manager",
"mvn verify",
"<security-enabled>false</security-enabled>",
"<reject-empty-validated-user>true</reject-empty-validated-user>",
"<broker_instance_dir> /bin/artemis help mask",
"<web bind=\"https://localhost:8443\" path=\"web\" keyStorePassword=\"ENC(-342e71445830a32f95220e791dd51e82)\" trustStorePassword=\"ENC(32f94e9a68c45d89d962ee7dc68cb9d1)\"> <app url=\"activemq-branding\" war=\"activemq-branding.war\"/> </web>",
"<broker_instance_dir> /bin/artemis mask <password>",
"result: 3a34fd21b82bf2a822fa49a8d8fa115d",
"<cluster-password> <password> </cluster-password>",
"<cluster-password> 3a34fd21b82bf2a822fa49a8d8fa115d </cluster-password>",
"<cluster-password> ENC(3a34fd21b82bf2a822fa49a8d8fa115d) </cluster-password>",
"<broker_instance_dir> /bin/artemis mask --key <key> <password>",
"export ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY= <key>",
"./artemis run",
"unset ARTEMIS_DEFAULT_SENSITIVE_STRING_CODEC_KEY",
"<configuration> <core> <authentication-cache-size>2000</authentication-cache-size> <authorization-cache-size>1500</authorization-cache-size> </core> </configuration>",
"<configuration> <core> <security-invalidation-interval>20000</security-invalidation-interval> </core> </configuration>"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/configuring_amq_broker/assembly-br-securing-brokers_configuring
|
Chapter 14. Networking Tapset
|
Chapter 14. Networking Tapset This family of probe points is used to probe the activities of the network device and protocol layers.
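For example, the following short script uses probe points from this tapset to total the bytes received and transmitted per network device over a ten-second window. It is a minimal sketch: the netdev.receive and netdev.transmit probe points and their dev_name and length variables are assumed to be available in your tapset version.

# Count received and transmitted bytes per device for ten seconds.
global recv, xmit

probe netdev.receive  { recv[dev_name] += length }
probe netdev.transmit { xmit[dev_name] += length }

probe timer.s(10) {
  foreach (dev in recv)
    printf("%s: received %d bytes\n", dev, recv[dev])
  foreach (dev in xmit)
    printf("%s: transmitted %d bytes\n", dev, xmit[dev])
  exit()
}

Run the script with stap, for example stap netdev-bytes.stp (the file name is arbitrary).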
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/networking-dot-stp
|
Chapter 3. Uninstalling OpenShift Serverless Knative Serving
|
Chapter 3. Uninstalling OpenShift Serverless Knative Serving Before you can remove the OpenShift Serverless Operator, you must remove Knative Serving. To uninstall Knative Serving, you must remove the KnativeServing custom resource (CR) and delete the knative-serving namespace. 3.1. Uninstalling Knative Serving Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on OpenShift Dedicated. Install the OpenShift CLI ( oc ). Procedure Delete the KnativeServing CR: $ oc delete knativeservings.operator.knative.dev knative-serving -n knative-serving After the command has completed and all pods have been removed from the knative-serving namespace, delete the namespace: $ oc delete namespace knative-serving
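As an optional check that is not part of the documented procedure, you can confirm that the CR and its pods are gone before deleting the namespace, and that the namespace deletion completed; the commands below are a suggested sketch:

$ oc get knativeservings.operator.knative.dev -n knative-serving
$ oc get pods -n knative-serving
$ oc get namespace knative-serving

The first two commands should report that no resources are found once the KnativeServing CR and its pods have been removed, and the last command should return a NotFound error after the namespace deletion completes.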
|
[
"oc delete knativeservings.operator.knative.dev knative-serving -n knative-serving",
"oc delete namespace knative-serving"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/removing_openshift_serverless/uninstalling-knative-serving
|
Chapter 1. Red Hat build of Keycloak 24.0
|
Chapter 1. Red Hat build of Keycloak 24.0 1.1. Overview Red Hat is proud to introduce a new era of identity and access management named Red Hat build of Keycloak. Red Hat build of Keycloak is based on the Keycloak project, which enables you to secure your web applications by providing Web SSO capabilities based on popular standards such as OpenID Connect, OAuth 2.0, and SAML 2.0. The Red Hat build of Keycloak server acts as an OpenID Connect or SAML-based identity provider (IdP), allowing your enterprise user directory or third-party IdP to secure your applications by using standards-based security tokens. While preserving the power and functionality of Red Hat Single Sign-On, Red Hat build of Keycloak is faster, more flexible, and more efficient. Red Hat build of Keycloak is an application built with Quarkus, which provides developers with flexibility and modularity. Quarkus provides a framework that is optimized for a container-first approach and provides many features for developing cloud-native applications. 1.2. Updates for 24.0.10 This release contains several fixed issues . 1.3. Updates for 24.0.9 This release contains several fixed issues , some known issues , and the following additional changes. 1.3.1. CVE fixes CVE-2024-10451 Sensitive Data Exposure in Keycloak Build Process CVE-2024-10270 Keycloak Denial of Service CVE-2024-10492 Keycloak path traversal CVE-2024-9666 Keycloak proxy header handling Denial-of-Service [DoS] vulnerability CVE-2024-10039 Keycloak TLS passthrough 1.3.2. User attribute searches are now case-sensitive When searching for users by user attribute, Red Hat build of Keycloak no longer forces lower case comparisons on user attribute names. The goal of this change was to speed up searches by allowing Red Hat build of Keycloak to use the native index on the user attribute table. If your database collation is case-insensitive, your search results will stay the same. If your database collation is case-sensitive, you might see fewer search results than before. For more details, see Miscellaneous changes . 1.3.3. Updates to documentation of X.509 client certificate lookup by proxy Potentially vulnerable configurations have been identified in the X.509 client certificate lookup when using a reverse proxy. If you have configured the client certificate lookup by a proxy header, additional configuration steps might be required. For more detail, see Enabling client certificate lookup . 1.3.4. Security improvements for the key resolvers While using the REALM_FILESEPARATOR_KEY key resolver, Red Hat build of Keycloak now restricts access to FileVault secrets outside of its realm. Characters that could cause path traversal when specifying the expression placeholder in the Admin Console are now prohibited. Additionally, the KEY_ONLY key resolver now escapes the _ character to prevent reading secrets that would otherwise be linked to another realm when the REALM_UNDERSCORE_KEY resolver is used. The escaping simply replaces _ with __ , so, for example, ${vault.my_secret} now looks for a file named my__secret . Because this is a breaking change, a warning is logged to ease the transition. 1.4. Updates for 24.0.8 This release contains several fixed issues and CVE fixes. 1.4.1. CVE fixes CVE-2024-8698 Improper verification of SAML responses leading to privilege escalation in Red Hat build of Keycloak CVE-2024-8883 Vulnerable redirect URI validation results in Open Redirect 1.5. Updates for 24.0.7 This release contains several fixed issues and the following additional changes.
1.5.1. CVE fixes CVE-2024-7318 One Time Passcode (OTP) is valid longer (double) than the expiration time. CVE-2024-7260 Open Redirect vulnerability on the Account page that could lead a user to visit a malicious webpage. CVE-2024-7341 Session fixation in Elytron SAML adapters, fixed for better protection against possible cookie hijacking. 1.5.2. Concurrent login requests blocked by brute force Prior to this release, if an attacker launched many parallel login attempts, the attacker had more chances to guess a password than the brute force protection permitted. This situation happened because the brute force check occurred before the Brute Force Protector locked the user. In this release, the Brute Force Protector rejects all login attempts that occur while another login is in progress in the same server. If you need to disable this feature, issue the following command: bin/kc.[sh|bat] start --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true 1.6. Updates for 24.0.6 This release contains several fixed issues and the following additional changes. 1.6.1. Improved performance for user consent deletion When a client scope or the full realm is deleted, the associated user consents should also be removed. A new index over the USER_CONSENT_CLIENT_SCOPE table has been added to increase the performance. Note that, if the table contains more than 300,000 entries, Red Hat build of Keycloak skips the creation of the indexes during the automatic schema migration and logs the SQL statements to the console instead. The statements must be run manually in the database after Red Hat build of Keycloak startup. 1.6.2. Change for LDAP Connection Pool configuration In this release, the LDAP connection pool configuration relies solely on system properties. The main reason is that the LDAP connection pool configuration is a JVM-level configuration rather than specific to an individual realm or LDAP provider instance. Compared to previous releases, any realm configuration related to the LDAP connection pool will be ignored. If you are migrating from previous versions where any of the following settings are set on your LDAP providers, consider using these system properties instead: connectionPoolingAuthentication connectionPoolingInitSize connectionPoolingMaxSize connectionPoolingPrefSize connectionPoolingTimeout connectionPoolingProtocol connectionPoolingDebug For more details, see Configuring the connection pool . 1.7. Updates for 24.0.5 This release contains several fixed issues and a CVE fix. 1.7.1. CVE fix The release includes a fix for CVE-2024-4540 , which addresses a flaw related to OAuth 2.0 Pushed Authorization Requests. This security issue affects some OIDC confidential clients using PAR (Pushed authorization request). If you use OIDC confidential clients together with PAR and you use client authentication based on client_id and client_secret sent as parameters in the HTTP request body (method client_secret_post specified in the OIDC specification), it is highly encouraged to rotate the client secrets of your clients after upgrading to this version. 1.8. Updates for 24.0.4 This release includes Fixed issues and the following update. 1.8.1. Change for updating users through the Admin User API When updating user attributes through the Admin User API, you can no longer execute partial updates when updating the user attributes, including the root attributes such as username , email , firstName , and lastName . This feature is no longer supported. 1.9.
Updates for 22.0 If you are migrating from Red Hat Single Sign-On 7.6, other new features were added at Red Hat build of Keycloak version 22. For details, see the version 22 Release Notes . 1.10. New features and enhancements The following release notes apply to Red Hat build of Keycloak 24.0.3, the first 24.0 release of the product. 1.10.1. User profile and progressive profiling The user profile preview feature is promoted to fully supported, and the user profile is enabled by default. The following are a few highlights of this feature: Fine-grained control over the attributes that users and administrators can manage so that you can prevent unexpected attributes and values from being set. Ability to specify what user attributes are managed and should be displayed on the forms to regular users or administrators. Dynamic forms - Previously, the forms where users created or updated their profiles contained four basic attributes: username, email, first name, and last name. The addition of any attributes (or removal of some default attributes) required you to create a custom theme. Now custom themes may not be needed because users see exactly the requested attributes based on the requirements of the particular deployment. Validations - Ability to specify validators for the user attributes, including built-in validators that you can use to specify a maximum or minimum length, a specific regex, or to limit a particular attribute to be a URL or number. Annotations - Ability to specify that a particular attribute should be rendered, for instance, as a text area, an HTML select with specified options, a calendar, or many other options. You can also bind JavaScript code to a specific field to change how an attribute is rendered and customize its behavior. Progressive profiling - Ability to specify that some fields are required or available on the forms just for particular values of the scope parameter. This effectively allows progressive profiling. You no longer need to ask the user for twenty attributes during registration; you can instead ask the user to fill in attributes incrementally according to the requirements of the individual client applications that are used by the user. Migration from previous versions - The user profile is now always enabled, but it operates as before for those who did not use this feature. You can benefit from the user profile capabilities, but you are not required to use them. For migration instructions, see the Upgrading Guide . The first release of the user profile as a supported feature is just the starting point and the baseline for delivering many more capabilities around identity management. For more details about user profile capabilities, see the Server Administration Guide . 1.10.1.1. Breaking changes to the User Profile SPI In this release, changes to the User Profile SPI might impact existing implementations based on this SPI. For more details, see the Upgrading Guide . 1.10.1.2. Changes to Freemarker templates to render pages based on the user profile and realm In this release, the following templates were updated to make it possible to dynamically render attributes based on the user profile configuration set to a realm: login-update-profile.ftl register.ftl update-email.ftl For more details, see the Upgrading Guide . 1.10.1.3.
New Freemarker template for the update profile page at first login through a broker In this release, the server renders the update profile page when the user is authenticating through a broker for the first time using the idp-review-user-profile.ftl template. For more details, see the Upgrading Guide . 1.10.2. Multi-site active-passive deployments Deploying Red Hat build of Keycloak to multiple independent sites is essential for some environments to provide high availability and a speedy recovery from failures. This release supports active-passive deployments for Red Hat build of Keycloak. To get started, use the High Availability Guide , which also includes a comprehensive blueprint to deploy a highly available Red Hat build of Keycloak to a cloud environment. 1.10.3. Account Console version 3 Account Console version 3 has built-in support for the user profile feature, which allows administrators to configure which attributes are available to users in the Account Console, and lands a user directly on their personal account page after logging in. If you are using or extending the customization features of this theme, you may need to perform additional migrations. For more details, see the Upgrading Guide . Account Console version 2 is deprecated and will be removed in a subsequent release. 1.10.4. Welcome Page redesign The Welcome page that appears at the first use of Red Hat build of Keycloak is redesigned. It provides a better setup experience and conforms to the latest version of PatternFly . The simplified page layout includes only a form to register the first administrative user. After completing the registration, the user is sent directly to the Admin Console. If you use a custom theme, you may need to update it to support the new welcome page. For details, see the Upgrading Guide . 1.10.5. Enhanced reverse proxy settings It is now possible to separately enable parsing of either Forwarded or X-Forwarded-* headers by using the new --proxy-headers option. For details, see Using a reverse proxy . The original --proxy option is now deprecated and will be removed in a future release. For migration instructions, see the Upgrading Guide . 1.10.6. OAuth/OIDC related improvements 1.10.6.1. Lightweight access tokens support This release contains support for Lightweight access tokens. As a result, you can have smaller access tokens for specified clients. These tokens have only a few claims, which is why they are smaller. Note that a lightweight access token is still a JWT signed by the realm key by default and still contains some very basic claims. This release introduces an Add to lightweight access token flag that is available on some OIDC protocol mappers. Use this flag to specify if a particular claim should be added to a lightweight access token. It is OFF by default, which means that most claims are not added. Also, a client policy executor exists. Use it to specify if a particular client request should use lightweight access tokens or regular access tokens. An alternative to the executor is to use an Always use lightweight access token flag on client advanced settings, which causes that client to always use lightweight access tokens. An executor can be an alternative if you need more flexibility. For instance, you may choose to use lightweight access tokens by default but use regular tokens only for the specified scope parameter. In previous versions, the introspection endpoint automatically returned most claims that were available in the access token.
Now most protocol mappers include a new Add to token introspection switch . This addition allows more flexibility because an introspection endpoint can return different claims than an access token. This change is a first step towards "Lightweight access tokens" support because access tokens can omit lots of the claims, which would still be returned by the introspection endpoint. When migrating from previous versions, the introspection endpoint should return the same claims that are returned from the access token, so the behavior should be effectively the same by default after the upgrade. For more details, see Using lightweight access tokens . 1.10.6.2. OAuth 2.1 support This release contains optional OAuth 2.1 support. New client policy profiles were introduced in this release, which administrators can use to make sure that clients and particular client requests comply with the OAuth 2.1 specification. This release includes a dedicated client profile for confidential clients and a dedicated profile for public clients. For more details, see OAuth 2.1 support . 1.10.6.3. Scope parameter supported in the refresh token flow Starting with this release, the scope parameter in the OAuth2/OIDC endpoint for token refresh is supported. Use this parameter to request access tokens with fewer scopes than originally granted, which means you cannot increase the access token scope. This scope limitation does not affect the scope of the refreshed refresh token. This function works as described in the OAuth2 specification. For more details, see the Server Administration Guide . 1.10.6.4. Client policy executor for secure redirect URIs A new client policy executor secure-redirect-uris-enforcer is introduced. Use it to restrict which redirect URIs can be used by the clients. For instance, you can specify that client redirect URIs cannot have wildcards, must come only from a specific domain, must be OAuth 2.1 compliant, and so on. For more details, see Client Policies . 1.10.6.5. Client policy executor for enforcing DPoP A new client policy executor dpop-bind-enforcer is introduced. You can use it to enforce DPoP for a particular client if dpop preview is enabled. For more details, see Client Policies . 1.10.6.6. Supporting EdDSA You can create EdDSA realm keys and use them as signature algorithms for various clients. For instance, you can use these keys to sign tokens or for client authentication with signed JWT. This feature includes identity brokering where Red Hat build of Keycloak itself signs client assertions that are used for private_key_jwt authentication to third party identity providers. For more details, see Configuring Realm keys . 1.10.6.7. EC Keys supported by JavaKeystore provider The provider JavaKeystoreProvider for providing realm keys now supports EC keys in addition to previously supported RSA keys. For more details, see Configuring Realm keys . 1.10.6.8. Option to add X509 thumbprint to JWT when using private_key_jwt authentication for identity providers OIDC identity providers now have the Add X.509 Headers to the JWT option for situations where client authentication with a JWT signed by a private key is used. This option can be useful for interoperability with some identity providers such as Azure AD, which require the thumbprint to be present on the JWT. For more details, see Integrating identity providers . 1.10.6.9. OAuth Grant Type SPI The Red Hat build of Keycloak codebase includes an internal update to introduce the OAuth Grant Type SPI.
This update allows additional flexibility when introducing custom grant types supported by the Red Hat build of Keycloak OAuth 2 token endpoint. For more details, see Authorization services . 1.10.6.10. FAPI 2 drafts support Red Hat build of Keycloak has new client profiles fapi-2-security-profile and fapi-2-message-signing , which ensure Red Hat build of Keycloak enforces compliance with the latest FAPI 2 draft specifications when communicating with your clients. For more details, see Client Policies . 1.10.6.11. DPoP preview support Red Hat build of Keycloak has preview support for OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP). 1.10.6.12. Feature flag for OAuth 2.0 device authorization grant flow The OAuth 2.0 device authorization grant flow now includes a feature flag, so you can easily disable this feature. This feature is still enabled by default. For more details, see Device authorization grant . 1.10.7. Authentication 1.10.7.1. Passkeys support Red Hat build of Keycloak has preview support for Passkeys . Passkey registration and authentication are realized by the features of WebAuthn. Therefore, users of Red Hat build of Keycloak can perform Passkey registration and authentication by using the existing WebAuthn registration and authentication flows. Both synced Passkeys and device-bound Passkeys can be used for both Same-Device and Cross-Device Authentication. However, the success of Passkey operations depends on the user's environment. Verify which operations can succeed in your environment. 1.10.7.2. WebAuthn improvements The WebAuthn policy includes a new field: Extra Origins . It provides better interoperability with non-Web platforms (for example, native mobile applications). 1.10.7.3. You are already logged-in This release addresses an issue that occurs when a user has a login page open in multiple browser tabs and is authenticated in one browser tab. When the user tried to authenticate in another browser tab, a message appeared: You are already logged-in . This situation is improved now as other browser tabs automatically authenticate the user after authentication in the first tab. However, more improvements are needed. For example, when an authentication session expires and is restarted in one browser tab, other browser tabs do not follow automatically with the login. 1.10.7.4. Password policy to specify maximum authentication time Red Hat build of Keycloak supports a new password policy that allows you to specify the maximum age of an authentication with which a password may be changed by a user without re-authentication. When this password policy is set to 0, the user is required to re-authenticate to change the password in the Account Console or by other means. You can also specify a lower or higher value than the default value of 5 minutes. 1.10.8. Server distribution 1.10.8.1. Load Shedding support Red Hat build of Keycloak now features the http-max-queued-requests option to allow proper rejection of incoming requests under high load. For details, see the Server Guide . 1.10.8.2. RESTEasy Reactive Red Hat build of Keycloak has switched to RESTEasy Reactive. Applications using quarkus-resteasy-reactive should still benefit from a better startup time, runtime performance, and memory footprint, even when not using a reactive style or semantics. SPIs that depend directly on the JAX-RS API should be compatible with this change. SPIs that depend on RESTEasy Classic, including ResteasyClientBuilder , will not be compatible and will require an update.
This update will also be needed for other implementations of the JAX-RS API, such as Jersey. 1.10.9. Keycloak CR 1.10.9.1. Keycloak CR Optimized Field The Keycloak CR now includes a startOptimized field, which may be used to override the default assumption about whether to use the --optimized flag for the start command. As a result, you can use the CR to configure build-time options even when a custom Keycloak image is used. 1.10.9.2. Keycloak CR resources options The Keycloak CR now allows for specifying the resources options for managing compute resources for the Keycloak container. It provides the ability to request and limit resources independently for the main Red Hat build of Keycloak deployment by using the Keycloak CR, and for the realm import Job by using the Realm Import CR. When no values are specified, the default requests memory is set to 1700MiB , and the limits memory is set to 2GiB . You can specify your custom values based on your requirements as follows: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi For more details, see the Operator Guide . 1.10.9.3. Keycloak CR cache-config-file option The Keycloak CR now allows for specifying the cache-config-file option by using the cache spec configMapFile field, for example: apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... cache: configMapFile: name: my-configmap key: config.xml 1.10.10. Versioned Features Features now support versioning. To preserve backward compatibility, all existing features (including account2 and account3 ) are marked as version 1. Newly introduced features will use versioning, which means that users can select between different implementations of desired features. For details, see the Server Guide . 1.10.10.1. Keycloak CR Truststores You may also take advantage of the new server-side handling of truststores by using the Keycloak CR, for example: spec: truststores: mystore: secret: name: mystore-secret myotherstore: secret: name: myotherstore-secret Currently only Secrets are supported. 1.10.10.2. Trust Kubernetes CA The cert for the Kubernetes CA is added automatically to your Red Hat build of Keycloak Pods managed by the Operator. 1.10.11. Group scalability Performance of group searches is improved for use cases with many groups and subgroups. The improvements include paginated lookup of subgroups. 1.10.12. Keycloak JS 1.10.12.1. Using exports field in package.json The Red Hat build of Keycloak JS adapter now uses the exports field in its package.json . This change improves support for more modern bundlers like Webpack 5 and Vite, but comes with some unavoidable breaking changes. See the Upgrading Guide for more details. 1.10.12.2. PKCE enabled by default The Red Hat build of Keycloak JS adapter now sets the pkceMethod option to S256 by default. This change enables Proof Key for Code Exchange ( PKCE ) for all applications using the adapter. If you use the adapter on a system that does not support PKCE, you can set the pkceMethod option to false to disable it. 1.10.13. Changes to Password Hashing In this release, we adapted the password hashing defaults to match the OWASP recommendations for Password Storage . As part of this change, the default password hashing provider has changed from pbkdf2-sha256 to pbkdf2-sha512 . Also, the number of default hash iterations for pbkdf2 based password hashing algorithms changed.
This change means better security aligned with the latest recommendations, but it has an impact on performance. It is possible to stick to the old behavior by adding the hashAlgorithm and hashIterations password policies to your realm. For more details, see the Upgrading Guide . 1.10.14. Truststore improvements Red Hat build of Keycloak introduces improved truststore configuration options. The Red Hat build of Keycloak truststore is now used across the server, including outgoing connections, mTLS, and database drivers. You no longer need to configure separate truststores for individual areas. To configure the truststore, you can put your truststore files or certificates in the default conf/truststores , or use the new truststore-paths config option. For details, see the Server Guide . 1.10.15. More changes 1.10.15.1. Automatic certificate management for SAML identity providers The SAML identity providers can now be configured to automatically download the signing certificates from the IDP entity metadata descriptor endpoint. In order to use the new feature, configure the Metadata descriptor URL option in the provider (the URL where the IDP metadata information with the certificates is published) and set Use metadata descriptor URL to ON . The certificates are automatically downloaded and cached in the public-key-storage SPI from that URL. The certificates can also be reloaded or imported from the Admin Console, using the action combo box on the provider page. See the Server Administration Guide for more details about the new options. 1.10.15.2. Non-blocking health check for load balancers A new health check endpoint available at /lb-check was added. The check runs in the event loop, which means it remains responsive even in overloaded situations when Red Hat build of Keycloak needs to handle many requests waiting in the request queue. This behavior is useful, for example, in a multi-site deployment to avoid failing over to another site that is under heavy load. The endpoint currently checks the availability of the embedded and external Infinispan caches. Other checks may be added later. This endpoint is not available by default. To enable it, run Red Hat build of Keycloak with the multi-site feature. For more details, see Enabling and disabling features . 1.10.15.3. Changes to the user representation in both Admin API and Account contexts In this release, we are encapsulating the root user attributes (such as username , email , firstName , lastName , and locale ) by moving them to a base/abstract class in order to align how these attributes are marshalled and unmarshalled when using both Admin and Account REST APIs. This strategy provides consistency in how attributes are managed by clients and makes sure they conform to the user profile configuration set to a realm. For more details, see the Upgrading Guide . 1.10.15.4. Partial update to user attributes when updating users through the Admin User API is no longer supported When updating user attributes through the Admin User API, you cannot execute partial updates when updating the user attributes, including the root attributes such as username , email , firstName , and lastName . For more details, see the Upgrading Guide . 1.10.15.5. Sequential loading of offline sessions and remote sessions Starting with this release, the first member of a Red Hat build of Keycloak cluster will load remote sessions sequentially instead of in parallel. If offline session preloading is enabled, those will be loaded sequentially as well.
For more details, see the Upgrading Guide . 1.10.15.6. Performing actions on behalf of another already authenticated user is no longer possible In this release, you can no longer perform actions such as email verification if the user is already authenticated and the action is bound to another user. For instance, a user cannot complete the verification email flow if the email link is bound to a different account. 1.10.15.7. Changes to the email verification flow In this release, if a user tries to follow the link to verify the email and the email was previously verified, a proper message will be shown. In addition to that, a new error ( EMAIL_ALREADY_VERIFIED ) event will be fired to indicate an attempt to verify an already verified email. You can use this event to track possible attempts to hijack user accounts in case the link has leaked or to alert users if they do not recognize the action. 1.10.15.8. Localization files for themes default to UTF-8 encoding Message properties files for themes are now read in UTF-8 encoding, with an automatic fallback to ISO-8859-1 encoding. See the Upgrading Guide for more details. 1.10.15.9. Configuration option for offline session lifespan override in memory To reduce memory requirements, we introduced a configuration option to shorten the lifespan of offline sessions imported into the Infinispan caches. Currently, the offline session lifespan override is disabled by default. For more details, see the Server Administration Guide . 1.10.15.10. Infinispan metrics use labels for cache manager and cache names When enabling metrics for Red Hat build of Keycloak's embedded caches, the metrics now use labels for the cache manager and the cache names. For more details, see the Upgrading Guide . 1.10.15.11. User attribute value length extension As of this release, Red Hat build of Keycloak supports storing and searching by user attribute values longer than 255 characters, which was previously a limitation. For more details, see the Upgrading Guide . 1.10.15.12. Brute Force Protection changes There have been a couple of enhancements to Brute Force Protection: When an attempt to authenticate with an OTP or Recovery Code fails due to Brute Force Protection, the active Authentication Session is invalidated. Any further attempts to authenticate with that session will fail. In previous versions of Red Hat build of Keycloak, the administrator had to choose between disabling users temporarily or permanently due to a Brute Force attack on their accounts. The administrator can now permanently disable a user after a given number of temporary lockouts. The property failedLoginNotBefore has been added to the brute-force/users/{userId} endpoint. 1.10.15.13. Authorization Policy In previous versions of Red Hat build of Keycloak, when the last member of a User, Group, or Client policy was deleted, that policy would also be deleted. Unfortunately, this could lead to an escalation of privileges if the policy was used in an aggregate policy. To avoid privilege escalation, the affected policies are no longer deleted, and an administrator will need to update those policies. 1.10.15.14. Temporary lockout log replaced with event There is now a new event USER_DISABLED_BY_TEMPORARY_LOCKOUT when a user is temporarily locked out by the brute force protector. The log with ID KC-SERVICES0053 has been removed as the new event offers the information in a structured form. For more details, see the Upgrading Guide . 1.10.15.15.
1.10.15.15. Updates to cookies Cookie handling code has been refactored and improved, including a new Cookie Provider. This provides better consistency for cookies handled by Red Hat build of Keycloak, and the ability to introduce configuration options around cookies if needed. 1.10.15.16. SAML User Attribute Mapper For NameID now suggests only valid NameID formats The User Attribute Mapper For NameID allowed setting the Name ID Format option to the following values: urn:oasis:names:tc:SAML:1.1:nameid-X509SubjectName urn:oasis:names:tc:SAML:1.1:nameid-WindowsDomainQualifiedName urn:oasis:names:tc:SAML:2.0:nameid-kerberos urn:oasis:names:tc:SAML:2.0:nameid-entity However, Red Hat build of Keycloak does not support receiving an AuthnRequest document with one of these NameIDPolicy values, so these mappers would never be used. The supported options were updated to include only the following Name ID Formats: urn:oasis:names:tc:SAML:1.1:nameid-emailAddress urn:oasis:names:tc:SAML:1.1:nameid-unspecified urn:oasis:names:tc:SAML:2.0:nameid-persistent urn:oasis:names:tc:SAML:2.0:nameid-transient 1.10.15.17. Different JVM memory settings when running in a container Instead of specifying hardcoded values for the initial and maximum heap size, Red Hat build of Keycloak now uses values relative to the total memory of the container. The JVM options -Xms and -Xmx were replaced by -XX:InitialRAMPercentage and -XX:MaxRAMPercentage . Warning This change can significantly impact memory consumption, so you might need to take specific actions. For more details, see the Upgrading Guide . 1.10.15.18. Deprecated offline session preloading The default behavior of Red Hat build of Keycloak is to load offline sessions on demand. The old behavior of preloading them at startup is now deprecated because it does not scale well with a growing number of sessions and increases Red Hat build of Keycloak memory usage. The old behavior will be removed in a future release. For more details, see the Upgrading Guide . 1.11. Fixed issues Each release includes fixed issues: Red Hat build of Keycloak 24.0.10 Fixed Issues Red Hat build of Keycloak 24.0.9 Fixed Issues Red Hat build of Keycloak 24.0.8 Fixed Issues Red Hat build of Keycloak 24.0.7 Fixed Issues Red Hat build of Keycloak 24.0.6 Fixed Issues Red Hat build of Keycloak 24.0.5 Fixed Issues Red Hat build of Keycloak 24.0.4 Fixed Issues Red Hat build of Keycloak 24.0.3 Fixed Issues 1.12. Known issues 1.12.1. Red Hat Single Sign-On 7.6 OIDC adapters issue These adapters do not work by default with Red Hat build of Keycloak 24.0. When running Red Hat Single Sign-On 7.6 OIDC adapters with Red Hat build of Keycloak 24.0, the log shows a CODE_TO_TOKEN_ERROR event. To work around this issue, make the following change for each Red Hat build of Keycloak client that points to an application secured by Red Hat Single Sign-On 7.6 adapters: In the Admin Console, select the affected client. Go to the Advanced tab. Locate the OpenID Connect Compatibility Modes section. Toggle Exclude Issuer From Authentication Response to ON . For more information, see https://issues.redhat.com/browse/RHSSO-3030 . 1.12.2. SAML adapter for JBoss EAP 8.0 issue The Red Hat build of Keycloak 24.0 SAML adapter for JBoss EAP 8.0 cannot be used with the JBoss EAP Installation Manager. As a workaround, use the Red Hat build of Keycloak 22.0 SAML Adapter for JBoss EAP 8.0. 1.13. Supported configurations For the supported configurations for Red Hat build of Keycloak 24.0, see Supported configurations .
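As a rough illustration of the container memory change described in section 1.10.15.17 above, the new RAM-percentage defaults can typically be overridden at container startup. This sketch assumes the container image honors a JAVA_OPTS_KC_HEAP environment variable for heap-related settings; the image reference and percentage values are placeholders, not a recommendation:

# Override heap-related JVM defaults of the containerized server (values are examples only)
# <keycloak-24-image> is a placeholder for your Red Hat build of Keycloak 24.0 container image
podman run -e JAVA_OPTS_KC_HEAP="-XX:InitialRAMPercentage=50 -XX:MaxRAMPercentage=70" <keycloak-24-image> start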
1.14. Component details For the list of supported component versions for Red Hat build of Keycloak 24.0, see Component details .
|
[
"bin/kc.[sh|bat] start --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: cache: configMapFile: name: my-configmap key: config.xml",
"spec: truststores: mystore: secret: name: mystore-secret myotherstore: secret: name: myotherstore-secret"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/release_notes/red_hat_build_of_keycloak_24_0
|
5.10. Determining Device Mapper Entries with the dmsetup Command
|
5.10. Determining Device Mapper Entries with the dmsetup Command You can use the dmsetup command to find out which device mapper entries match the multipathed devices. The following command displays all the device mapper devices and their major and minor numbers. The minor numbers determine the name of the dm device. For example, a minor number of 3 corresponds to the multipathed device /dev/dm-3 .
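If the full listing is too noisy, dmsetup can also narrow the output. The following commands are a small sketch; device names such as mpatha come from the sample output below and will differ on your system:

# List only devices whose mapping table uses the multipath target
dmsetup ls --target multipath

# Show details, including the major and minor numbers, for a single device
dmsetup info mpatha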
|
[
"dmsetup ls mpathd (253:4) mpathep1 (253:12) mpathfp1 (253:11) mpathb (253:3) mpathgp1 (253:14) mpathhp1 (253:13) mpatha (253:2) mpathh (253:9) mpathg (253:8) VolGroup00-LogVol01 (253:1) mpathf (253:7) VolGroup00-LogVol00 (253:0) mpathe (253:6) mpathbp1 (253:10) mpathd (253:5)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/dmsetup_queries
|