title | content | commands | url
---|---|---|---|
Chapter 34. Jira Transition Issue Sink | Chapter 34. Jira Transition Issue Sink Sets a new status (transition to) of an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue unique code. issueTransitionId / ce-issueTransitionId : as the new status (transition) code. You should carefully check the project workflow as each transition may have conditions to check before the transition is made. The comment of the transition is set in the body of the message. 34.1. Configuration Options The following table summarizes the configuration options available for the jira-transition-issue-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 34.2. Dependencies At runtime, the jira-transition-issue-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 34.3. Usage This section describes how you can use the jira-transition-issue-sink . 34.3.1. Knative Sink You can use the jira-transition-issue-sink Kamelet as a Knative sink by binding it to a Knative object. jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.1.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.3.2. Kafka Sink You can use the jira-transition-issue-sink Kamelet as a Kafka sink by binding it to a Kafka topic. 
jira-transition-issue-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueTransitionId" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-162" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 34.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 34.3.2.2. Procedure for using the cluster CLI Save the jira-transition-issue-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-transition-issue-sink-binding.yaml 34.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-transition-issue-sink-binding timer-source?message="The new comment 123"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl="jira url"\&username="username"\&password="password" This command creates the KameletBinding in the current namespace on the cluster. 34.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-transition-issue-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-transition-issue-sink-binding.yaml",
"kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-transition-issue-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueTransitionId\" value: 701 - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-162\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-transition-issue-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"",
"apply -f jira-transition-issue-sink-binding.yaml",
"kamel bind --name jira-transition-issue-sink-binding timer-source?message=\"The new comment 123\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-170 --step insert-header-action -p step-1.name=issueTransitionId -p step-1.value=5 jira-transition-issue-sink?jiraUrl=\"jira url\"\\&username=\"username\"\\&password=\"password\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jira-transition-issue-sink |
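The issueTransitionId header described above must match a transition that the target issue can actually make from its current workflow status. Before configuring the Kamelet, you can list the valid transition IDs through the Jira REST API; the following is a minimal sketch, assuming a Jira instance reachable at the jiraUrl shown in the examples and that jq is installed (the URL, credentials, and issue key are illustrative placeholders):

JIRA_URL="http://my_jira.com:8081"    # illustrative; use the jiraUrl from your binding
ISSUE_KEY="MYP-162"                   # illustrative issue key

# List the transitions the issue can make from its current status, so you can pick
# a valid value for the issueTransitionId / ce-issueTransitionId header.
curl -s -u "username:password" \
  -H "Accept: application/json" \
  "${JIRA_URL}/rest/api/2/issue/${ISSUE_KEY}/transitions" | jq '.transitions[] | {id, name}'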
Chapter 8. High availability for hosted control planes | Chapter 8. High availability for hosted control planes 8.1. Recovering an unhealthy etcd cluster In a highly available control plane, three etcd pods run as a part of a stateful set in an etcd cluster. To recover an etcd cluster, identify unhealthy etcd pods by checking the etcd cluster health. 8.1.1. Checking the status of an etcd cluster You can check the status of the etcd cluster health by logging into any etcd pod. Procedure Log in to an etcd pod by entering the following command: USD oc rsh -n <hosted_control_plane_namespace> -c etcd <etcd_pod_name> Print the health status of an etcd cluster by entering the following command: sh-4.4USD etcdctl endpoint health --cluster -w table Example output ENDPOINT HEALTH TOOK ERROR https://etcd-0.etcd-discovery.clusters-hosted.svc:2379 true 9.117698ms 8.1.2. Recovering a failing etcd pod Each etcd pod of a 3-node cluster has its own persistent volume claim (PVC) to store its data. An etcd pod might fail because of corrupted or missing data. You can recover a failing etcd pod and its PVC. Procedure To confirm that the etcd pod is failing, enter the following command: USD oc get pods -l app=etcd -n <hosted_control_plane_namespace> Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m The failing etcd pod might have the CrashLoopBackOff or Error status. Delete the failing pod and its PVC by entering the following command: USD oc delete pvc/<etcd_pvc_name> pod/<etcd_pod_name> --wait=false Verification Verify that a new etcd pod is up and running by entering the following command: USD oc get pods -l app=etcd -n <hosted_control_plane_namespace> Example output NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s 8.2. Backing up and restoring etcd in an on-premise environment You can back up and restore etcd on a hosted cluster in an on-premise environment to fix failures. 8.2.1. Backing up and restoring etcd on a hosted cluster in an on-premise environment By backing up and restoring etcd on a hosted cluster, you can fix failures, such as corrupted or missing data in an etcd member of a three node cluster. If multiple members of the etcd cluster encounter data loss or have a CrashLoopBackOff status, this approach helps prevent an etcd quorum loss. Important This procedure requires API downtime. Prerequisites The oc and jq binaries have been installed. 
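Before starting the procedure, you can quickly confirm that the prerequisites above are in place; a minimal sketch that only verifies the binaries are present and that you are logged in to the management cluster:

oc version --client    # confirms that the oc binary is available
jq --version           # confirms that the jq binary is available
oc whoami              # confirms that you are logged in to the cluster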
Procedure First, set up your environment variables and scale down the API servers: Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary: USD CLUSTER_NAME=my-cluster USD HOSTED_CLUSTER_NAMESPACE=clusters USD CONTROL_PLANE_NAMESPACE="USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}" Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary: USD oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{"spec":{"pausedUntil":"true"}}' --type=merge Scale down the API servers by entering the following commands: Scale down the kube-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0 Scale down the openshift-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0 Scale down the openshift-oauth-apiserver : USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0 , take a snapshot of etcd by using one of the following methods: Use a previously backed-up snapshot of etcd. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Take a snapshot of the pod database and save it locally to your machine by entering the following commands: USD ETCD_POD=etcd-0 USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl \ --cacert /etc/etcd/tls/etcd-ca/ca.crt \ --cert /etc/etcd/tls/client/etcd-client.crt \ --key /etc/etcd/tls/client/etcd-client.key \ --endpoints=https://localhost:2379 \ snapshot save /var/lib/snapshot.db Verify that the snapshot is successful by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db Make a local copy of the snapshot by entering the following command: USD oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db Make a copy of the snapshot database from etcd persistent storage: List etcd pods by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd Find a pod that is running and set its name as the value of ETCD_POD: ETCD_POD=etcd-0 , and then copy its snapshot database by entering the following command: USD oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db , scale down the etcd statefulset by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0 Delete volumes for second and third members by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2 Create a pod to access the first etcd member's data: Get the etcd image by entering the following command: USD ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }') Create a pod that allows access to etcd data: USD cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: 
/var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF Check the status of the etcd-data pod and wait for it to be running by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data Get the name of the etcd-data pod by entering the following command: USD DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2) Copy an etcd snapshot into the pod by entering the following command: USD oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db Remove old data from the etcd-data pod by entering the following commands: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data Restore the etcd snapshot by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db \ --data-dir=/var/lib/data --skip-hash-check \ --name etcd-0 \ --initial-cluster-token=etcd-cluster \ --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 \ --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 Remove the temporary etcd snapshot from the pod by entering the following command: USD oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db Delete data access deployment by entering the following command: USD oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data Scale up the etcd cluster by entering the following command: USD oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3 Wait for the etcd member pods to return and report as available by entering the following command: USD oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w Scale up all etcd-writer deployments by entering the following command: USD oc scale deployment -n USD{CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver Restore reconciliation of the hosted cluster by entering the following command: USD oc patch -n USD{CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{"spec":{"pausedUntil":""}}' --type=merge 8.3. Backing up and restoring etcd on AWS You can back up and restore etcd on a hosted cluster on Amazon Web Services (AWS) to fix failures. Important Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 8.3.1. Taking a snapshot of etcd for a hosted cluster To back up etcd for a hosted cluster, you must take a snapshot of etcd. Later, you can restore etcd by using the snapshot. Important This procedure requires API downtime. 
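The procedure below mixes angle-bracket placeholders with shell variables such as ${HOSTED_CLUSTER_NAMESPACE}, ${CLUSTER_NAME}, and ${BUCKET_NAME}. A minimal setup sketch with assumed example values that you would replace with your own:

CLUSTER_NAME=my-cluster               # name of the hosted cluster (assumed example)
HOSTED_CLUSTER_NAMESPACE=clusters     # namespace that contains the HostedCluster resource (assumed example)
BUCKET_NAME=somebucket                # S3 bucket that receives the snapshot (assumed example)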
Procedure Pause reconciliation of the hosted cluster by entering the following command: USD oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{"spec":{"pausedUntil":"true"}}' --type=merge Stop all etcd-writer deployments by entering the following command: USD oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver To take an etcd snapshot, use the exec command in each etcd container by entering the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db To check the snapshot status, use the exec command in each etcd container by running the following command: USD oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket. See the following example. Note The following example uses signature version 2. If you are in a region that supports signature version 4, such as the us-east-2 region, use signature version 4. Otherwise, when copying the snapshot to an S3 bucket, the upload fails. Example BUCKET_NAME=somebucket FILEPATH="/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` oc exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db To restore the snapshot on a new cluster later, save the encryption secret that the hosted cluster references. Get the secret encryption key by entering the following command: USD oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {"activeKey":{"name":"<hosted_cluster_name>-etcd-encryption-key"}} Save the secret encryption key by entering the following command: USD oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}' You can decrypt this key when restoring a snapshot on a new cluster. steps Restore the etcd snapshot. 8.3.2. Restoring an etcd snapshot on a hosted cluster If you have a snapshot of etcd from your hosted cluster, you can restore it. Currently, you can restore an etcd snapshot only during cluster creation. To restore an etcd snapshot, you modify the output from the create cluster --render command and define a restoreSnapshotURL value in the etcd section of the HostedCluster specification. Note The --render flag in the hcp create command does not render the secrets. To render the secrets, you must use both the --render and the --render-sensitive flags in the hcp create command. Prerequisites You took an etcd snapshot on a hosted cluster. 
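Because the snapshot can be restored only at cluster creation time, the usual flow is to render the cluster manifests, edit the etcd section, and then apply them. The following is a hedged sketch that assumes the hcp CLI on AWS; the other flags that a real invocation needs (pull secret, region, node pool size, and so on) are omitted for brevity:

# Render the HostedCluster manifests, including secrets, to a file instead of creating the cluster.
hcp create cluster aws \
  --name "${CLUSTER_NAME}" \
  --render --render-sensitive > hosted-cluster.yaml

# Edit hosted-cluster.yaml to add restoreSnapshotURL under spec.etcd.managed.storage,
# as shown in the next step, and then create the cluster from the edited manifests:
oc apply -f hosted-cluster.yaml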
Procedure On the aws command-line interface (CLI), create a pre-signed URL so that you can download your etcd snapshot from S3 without passing credentials to the etcd deployment: ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT}) Modify the HostedCluster specification to refer to the URL: spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - "USD{ETCD_SNAPSHOT_URL}" managementType: Managed Ensure that the secret that you referenced from the spec.secretEncryption.aescbc value contains the same AES key that you saved in the steps. 8.4. Disaster recovery for a hosted cluster in AWS You can recover a hosted cluster to the same region within Amazon Web Services (AWS). For example, you need disaster recovery when the upgrade of a management cluster fails and the hosted cluster is in a read-only state. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The disaster recovery process involves the following steps: Backing up the hosted cluster on the source management cluster Restoring the hosted cluster on a destination management cluster Deleting the hosted cluster from the source management cluster Your workloads remain running during the process. The Cluster API might be unavailable for a period, but that does not affect the services that are running on the worker nodes. Important Both the source management cluster and the destination management cluster must have the --external-dns flags to maintain the API server URL. That way, the server URL ends with https://api-sample-hosted.sample-hosted.aws.openshift.com . See the following example: Example: External DNS flags --external-dns-provider=aws \ --external-dns-credentials=<path_to_aws_credentials_file> \ --external-dns-domain-filter=<basedomain> If you do not include the --external-dns flags to maintain the API server URL, you cannot migrate the hosted cluster. 8.4.1. Overview of the backup and restore process The backup and restore process works as follows: On management cluster 1, which you can think of as the source management cluster, the control plane and workers interact by using the external DNS API. The external DNS API is accessible, and a load balancer sits between the management clusters. You take a snapshot of the hosted cluster, which includes etcd, the control plane, and the worker nodes. During this process, the worker nodes continue to try to access the external DNS API even if it is not accessible, the workloads are running, the control plane is saved in a local manifest file, and etcd is backed up to an S3 bucket. The data plane is active and the control plane is paused. On management cluster 2, which you can think of as the destination management cluster, you restore etcd from the S3 bucket and restore the control plane from the local manifest file. 
During this process, the external DNS API is stopped, the hosted cluster API becomes inaccessible, and any workers that use the API are unable to update their manifest files, but the workloads are still running. The external DNS API is accessible again, and the worker nodes use it to move to management cluster 2. The external DNS API can access the load balancer that points to the control plane. On management cluster 2, the control plane and worker nodes interact by using the external DNS API. The resources are deleted from management cluster 1, except for the S3 backup of etcd. If you try to set up the hosted cluster again on mangagement cluster 1, it will not work. 8.4.2. Backing up a hosted cluster To recover your hosted cluster in your target management cluster, you first need to back up all of the relevant data. Procedure Create a configmap file to declare the source management cluster by entering this command: USD oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME} Shut down the reconciliation in the hosted cluster and in the node pools by entering these commands: USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator USD PAUSED_UNTIL="true" USD oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{"spec":{"pausedUntil":"'USD{PAUSED_UNTIL}'"}}' --type=merge USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator Back up etcd and upload the data to an S3 bucket by running this bash script: Tip Wrap this script in a function and call it from the main function. 
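One way to follow the tip above is to wrap the backup in a function and call it from a main function; a minimal sketch of that pattern (the function names are illustrative), where the function body is the script shown next:

function backup_etcd_to_s3() {
    # Paste the etcd backup script that follows here.
    :
}

function main() {
    backup_etcd_to_s3
}

main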
# ETCD Backup ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH="/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" CONTENT_TYPE="application/x-compressed-tar" DATE_VALUE=`date -R` SIGNATURE_STRING="PUT\n\nUSD{CONTENT_TYPE}\nUSD{DATE_VALUE}\nUSD{FILEPATH}" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac "USD{SECRET_KEY}" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T "/var/lib/data/snapshot.db" \ -H "Host: USD{BUCKET_NAME}.s3.amazonaws.com" \ -H "Date: USD{DATE_VALUE}" \ -H "Content-Type: USD{CONTENT_TYPE}" \ -H "Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}" \ https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done For more information about backing up etcd, see "Backing up and restoring etcd on a hosted cluster". Back up Kubernetes and OpenShift Container Platform objects by entering the following commands. 
You need to back up the following objects: HostedCluster and NodePool objects from the HostedCluster namespace HostedCluster secrets from the HostedCluster namespace HostedControlPlane from the Hosted Control Plane namespace Cluster from the Hosted Control Plane namespace AWSCluster , AWSMachineTemplate , and AWSMachine from the Hosted Control Plane namespace MachineDeployments , MachineSets , and Machines from the Hosted Control Plane namespace ControlPlane secrets from the Hosted Control Plane namespace USD mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD chmod 700 USD{BACKUP_DIR}/namespaces/ # HostedCluster USD echo "Backing Up HostedCluster Objects:" USD oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml USD echo "--> HostedCluster" USD sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml # NodePool USD oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml USD echo "--> NodePool" USD sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml # Secrets in the HC Namespace USD echo "--> HostedCluster Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep "^USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done # Secrets in the HC Control Plane Namespace USD echo "--> HostedCluster ControlPlane Secrets:" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v "docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done # Hosted Control Plane USD echo "--> HostedControlPlane:" USD oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml # Cluster USD echo "--> Cluster:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) USD oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml # AWS Cluster USD echo "--> AWS Cluster:" USD oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml # AWS MachineTemplate USD echo "--> AWS Machine Template:" USD oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml # AWS Machines USD echo "--> AWS Machine:" USD CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > 
USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done # MachineDeployments USD echo "--> HostedCluster MachineDeployments:" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done # MachineSets USD echo "--> HostedCluster MachineSets:" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done # Machines USD echo "--> HostedCluster Machine:" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done Clean up the ControlPlane routes by entering this command: USD oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all By entering that command, you enable the ExternalDNS Operator to delete the Route53 entries. Verify that the Route53 entries are clean by running this script: function clean_routes() { if [[ -z "USD{1}" ]];then echo "Give me the NS where to clean the routes" exit 1 fi # Constants if [[ -z "USD{2}" ]];then echo "Give me the Route53 zone ID" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo "Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}..." echo "Try: (USD{count}/USD{timeout})" sleep 10 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for cleaning the Route53 DNS records" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } # SAMPLE: clean_routes "<HC ControlPlane Namespace>" "<AWS_ZONE_ID>" clean_routes "USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}" "USD{AWS_ZONE_ID}" Verification Check all of the OpenShift Container Platform objects and the S3 bucket to verify that everything looks as expected. steps Restore your hosted cluster. 8.4.3. Restoring a hosted cluster Gather all of the objects that you backed up and restore them in your destination management cluster. Prerequisites You backed up the data from your source management cluster. Tip Ensure that the kubeconfig file of the destination management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT2_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT2_KUBECONFIG} . 
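A short sketch of pointing oc at the destination management cluster before you run the restore steps; MGMT2_KUBECONFIG is the variable named in the tip above, and the two checks are optional:

export KUBECONFIG=${MGMT2_KUBECONFIG}
oc config current-context    # confirm that the active context is the destination management cluster
oc whoami --show-server      # confirm that the API server URL belongs to the destination cluster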
Procedure Verify that the new management cluster does not contain any namespaces from the cluster that you are restoring by entering these commands: # Just in case USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup # Namespace deletion in the destination Management cluster USD oc delete ns USD{HC_CLUSTER_NS} || true USD oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true Re-create the deleted namespaces by entering these commands: # Namespace creation USD oc new-project USD{HC_CLUSTER_NS} USD oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Restore the secrets in the HC namespace by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-* Restore the objects in the HostedCluster control plane namespace by entering these commands: # Secrets USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* # Cluster USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-* If you are recovering the nodes and the node pool to reuse AWS instances, restore the objects in the HC control plane namespace by entering these commands: # AWS USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* # Machines USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-* Restore the etcd data and the hosted cluster by running this bash script: ETCD_PODS="etcd-0" if [ "USD{CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then ETCD_PODS="etcd-0 etcd-1 etcd-2" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT="s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - "USD{ETCD_SNAPSHOT_URL}" EOF done cat USD{HC_RESTORE_FILE} if ! 
grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e "/type: PersistentVolume/r USD{HC_RESTORE_FILE}" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == "" ]];then echo "Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace" oc apply -f USD{HC_NEW_FILE} else echo "HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step" fi If you are recovering the nodes and the node pool to reuse AWS instances, restore the node pool by entering this command: USD oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-* Verification To verify that the nodes are fully restored, use this function: timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo "Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}" echo "Try: (USD{count}/USD{timeout})" sleep 30 if [[ USDcount -eq timeout ]];then echo "Timeout waiting for Nodes in the destination MGMT Cluster" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0 done steps Shut down and delete your cluster. 8.4.4. Deleting a hosted cluster from your source management cluster After you back up your hosted cluster and restore it to your destination management cluster, you shut down and delete the hosted cluster on your source management cluster. Prerequisites You backed up your data and restored it to your source management cluster. Tip Ensure that the kubeconfig file of the destination management cluster is placed as it is set in the KUBECONFIG variable or, if you use the script, in the MGMT_KUBECONFIG variable. Use export KUBECONFIG=<Kubeconfig FilePath> or, if you use the script, use export KUBECONFIG=USD{MGMT_KUBECONFIG} . Procedure Scale the deployment and statefulset objects by entering these commands: Important Do not scale the stateful set if the value of its spec.persistentVolumeClaimRetentionPolicy.whenScaled field is set to Delete , because this could lead to a loss of data. As a workaround, update the value of the spec.persistentVolumeClaimRetentionPolicy.whenScaled field to Retain . Ensure that no controllers exist that reconcile the stateful set and would return the value back to Delete , which could lead to a loss of data. # Just in case USD export KUBECONFIG=USD{MGMT_KUBECONFIG} # Scale down deployments USD oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all USD sleep 15 Delete the NodePool objects by entering these commands: NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName=="'USD{HC_CLUSTER_NAME}'")].metadata.name}') if [[ ! 
-z "USD{NODEPOOLS}" ]];then oc patch -n "USD{HC_CLUSTER_NS}" nodepool USD{NODEPOOLS} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi Delete the machine and machineset objects by entering these commands: # Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done USD oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true Delete the cluster object by entering these commands: # Cluster USD C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all Delete the AWS machines (Kubernetes objects) by entering these commands. Do not worry about deleting the real AWS machines. The cloud instances will not be affected. # AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done Delete the HostedControlPlane and ControlPlane HC namespace objects by entering these commands: # Delete HCP and ControlPlane HC NS USD oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' USD oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all USD oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true Delete the HostedCluster and HC namespace objects by entering these commands: # Delete HC and HC Namespace USD oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{"metadata":{"finalizers":null}}' --type merge || true USD oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true USD oc delete ns USD{HC_CLUSTER_NS} || true Verification To verify that everything works, enter these commands: # Validations USD export KUBECONFIG=USD{MGMT2_KUBECONFIG} USD oc get hc -n USD{HC_CLUSTER_NS} USD oc get np -n USD{HC_CLUSTER_NS} USD oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} # Inside the HostedCluster USD export KUBECONFIG=USD{HC_KUBECONFIG} USD oc get clusterversion USD oc get nodes steps Delete the OVN pods in the hosted cluster so that you can connect to the new OVN control plane that runs in the new management cluster: Load the KUBECONFIG environment variable with the hosted cluster's kubeconfig path. Enter this command: USD oc delete pod -n openshift-ovn-kubernetes --all | [
"oc rsh -n <hosted_control_plane_namespace> -c etcd <etcd_pod_name>",
"sh-4.4USD etcdctl endpoint health --cluster -w table",
"ENDPOINT HEALTH TOOK ERROR https://etcd-0.etcd-discovery.clusters-hosted.svc:2379 true 9.117698ms",
"oc get pods -l app=etcd -n <hosted_control_plane_namespace>",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 64m etcd-1 2/2 Running 0 45m etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m",
"oc delete pvc/<etcd_pvc_name> pod/<etcd_pod_name> --wait=false",
"oc get pods -l app=etcd -n <hosted_control_plane_namespace>",
"NAME READY STATUS RESTARTS AGE etcd-0 2/2 Running 0 67m etcd-1 2/2 Running 0 48m etcd-2 2/2 Running 0 2m2s",
"CLUSTER_NAME=my-cluster",
"HOSTED_CLUSTER_NAMESPACE=clusters",
"CONTROL_PLANE_NAMESPACE=\"USD{HOSTED_CLUSTER_NAMESPACE}-USD{CLUSTER_NAME}\"",
"oc patch -n USD{HOSTED_CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"ETCD_POD=etcd-0",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=https://localhost:2379 snapshot save /var/lib/snapshot.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} -c etcd -t USD{ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd",
"oc cp -c etcd USD{CONTROL_PLANE_NAMESPACE}/USD{ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2",
"ETCD_IMAGE=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }')",
"cat << EOF | oc apply -n USD{CONTROL_PLANE_NAMESPACE} -f - apiVersion: apps/v1 kind: Deployment metadata: name: etcd-data spec: replicas: 1 selector: matchLabels: app: etcd-data template: metadata: labels: app: etcd-data spec: containers: - name: access image: USDETCD_IMAGE volumeMounts: - name: data mountPath: /var/lib command: - /usr/bin/bash args: - -c - |- while true; do sleep 1000 done volumes: - name: data persistentVolumeClaim: claimName: data-etcd-0 EOF",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data",
"DATA_POD=USD(oc get -n USD{CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2)",
"oc cp /tmp/etcd.snapshot.db USD{CONTROL_PLANE_NAMESPACE}/USD{DATA_POD}:/var/lib/restored.snap.db",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm -rf /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- mkdir -p /var/lib/data",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db --data-dir=/var/lib/data --skip-hash-check --name etcd-0 --initial-cluster-token=etcd-cluster --initial-cluster etcd-0=https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380 --initial-advertise-peer-urls https://etcd-0.etcd-discovery.USD{CONTROL_PLANE_NAMESPACE}.svc:2380",
"oc exec -n USD{CONTROL_PLANE_NAMESPACE} USD{DATA_POD} -- rm /var/lib/restored.snap.db",
"oc delete -n USD{CONTROL_PLANE_NAMESPACE} deployment/etcd-data",
"oc scale -n USD{CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3",
"oc get -n USD{CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w",
"oc scale deployment -n USD{CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc patch -n USD{CLUSTER_NAMESPACE} hostedclusters/USD{CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"\"}}' --type=merge",
"oc patch -n clusters hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc scale deployment -n <hosted_cluster_namespace> --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/etcd-ca/ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db",
"oc exec -it <etcd_pod_name> -n <hosted_cluster_namespace> -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db",
"BUCKET_NAME=somebucket FILEPATH=\"/USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" ACCESS_KEY=accesskey SECRET_KEY=secret SIGNATURE_HASH=`echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac USD{SECRET_KEY} -binary | base64` exec -it etcd-0 -n USD{HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{CLUSTER_NAME}-snapshot.db",
"oc get hostedcluster <hosted_cluster_name> -o=jsonpath='{.spec.secretEncryption.aescbc}' {\"activeKey\":{\"name\":\"<hosted_cluster_name>-etcd-encryption-key\"}}",
"oc get secret <hosted_cluster_name>-etcd-encryption-key -o=jsonpath='{.data.key}'",
"ETCD_SNAPSHOT=USD{ETCD_SNAPSHOT:-\"s3://USD{BUCKET_NAME}/USD{CLUSTER_NAME}-snapshot.db\"} ETCD_SNAPSHOT_URL=USD(aws s3 presign USD{ETCD_SNAPSHOT})",
"spec: etcd: managed: storage: persistentVolume: size: 4Gi type: PersistentVolume restoreSnapshotURL: - \"USD{ETCD_SNAPSHOT_URL}\" managementType: Managed",
"--external-dns-provider=aws --external-dns-credentials=<path_to_aws_credentials_file> --external-dns-domain-filter=<basedomain>",
"oc create configmap mgmt-parent-cluster -n default --from-literal=from=USD{MGMT_CLUSTER_NAME}",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"PAUSED_UNTIL=\"true\" oc patch -n USD{HC_CLUSTER_NS} hostedclusters/USD{HC_CLUSTER_NAME} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc patch -n USD{HC_CLUSTER_NS} nodepools/USD{NODEPOOLS} -p '{\"spec\":{\"pausedUntil\":\"'USD{PAUSED_UNTIL}'\"}}' --type=merge oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator",
"ETCD Backup ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi for POD in USD{ETCD_PODS}; do # Create an etcd snapshot oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db oc exec -it USD{POD} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db FILEPATH=\"/USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" CONTENT_TYPE=\"application/x-compressed-tar\" DATE_VALUE=`date -R` SIGNATURE_STRING=\"PUT\\n\\nUSD{CONTENT_TYPE}\\nUSD{DATE_VALUE}\\nUSD{FILEPATH}\" set +x ACCESS_KEY=USD(grep aws_access_key_id USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SECRET_KEY=USD(grep aws_secret_access_key USD{AWS_CREDS} | head -n1 | cut -d= -f2 | sed \"s/ //g\") SIGNATURE_HASH=USD(echo -en USD{SIGNATURE_STRING} | openssl sha1 -hmac \"USD{SECRET_KEY}\" -binary | base64) set -x # FIXME: this is pushing to the OIDC bucket oc exec -it etcd-0 -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -- curl -X PUT -T \"/var/lib/data/snapshot.db\" -H \"Host: USD{BUCKET_NAME}.s3.amazonaws.com\" -H \"Date: USD{DATE_VALUE}\" -H \"Content-Type: USD{CONTENT_TYPE}\" -H \"Authorization: AWS USD{ACCESS_KEY}:USD{SIGNATURE_HASH}\" https://USD{BUCKET_NAME}.s3.amazonaws.com/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db done",
"mkdir -p USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS} USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} chmod 700 USD{BACKUP_DIR}/namespaces/ HostedCluster echo \"Backing Up HostedCluster Objects:\" oc get hc USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml echo \"--> HostedCluster\" sed -i '' -e '/^status:USD/,USDd' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml NodePool oc get np USD{NODEPOOLS} -n USD{HC_CLUSTER_NS} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml echo \"--> NodePool\" sed -i '' -e '/^status:USD/,USD d' USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-USD{NODEPOOLS}.yaml Secrets in the HC Namespace echo \"--> HostedCluster Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS} | grep \"^USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-USD{s}.yaml done Secrets in the HC Control Plane Namespace echo \"--> HostedCluster ControlPlane Secrets:\" for s in USD(oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} | egrep -v \"docker|service-account-token|oauth-openshift|NAME|token-USD{HC_CLUSTER_NAME}\" | awk '{print USD1}'); do oc get secret -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-USD{s}.yaml done Hosted Control Plane echo \"--> HostedControlPlane:\" oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-USD{HC_CLUSTER_NAME}.yaml Cluster echo \"--> Cluster:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) oc get cluster USD{CL_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-USD{HC_CLUSTER_NAME}.yaml AWS Cluster echo \"--> AWS Cluster:\" oc get awscluster USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-USD{HC_CLUSTER_NAME}.yaml AWS MachineTemplate echo \"--> AWS Machine Template:\" oc get awsmachinetemplate USD{NODEPOOLS} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-USD{HC_CLUSTER_NAME}.yaml AWS Machines echo \"--> AWS Machine:\" CL_NAME=USD(oc get hcp USD{HC_CLUSTER_NAME} -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\\*} | grep USD{HC_CLUSTER_NAME}) for s in USD(oc get awsmachines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --no-headers | grep USD{CL_NAME} | cut -f1 -d\\ ); do oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} awsmachines USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-USD{s}.yaml done MachineDeployments echo \"--> HostedCluster MachineDeployments:\" for s in USD(oc get machinedeployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do mdp_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-USD{mdp_name}.yaml done MachineSets echo \"--> HostedCluster MachineSets:\" for s in USD(oc get machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o 
name); do ms_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-USD{ms_name}.yaml done Machines echo \"--> HostedCluster Machine:\" for s in USD(oc get machine -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do m_name=USD(echo USD{s} | cut -f 2 -d /) oc get -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USDs -o yaml > USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-USD{m_name}.yaml done",
"oc delete routes -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"function clean_routes() { if [[ -z \"USD{1}\" ]];then echo \"Give me the NS where to clean the routes\" exit 1 fi # Constants if [[ -z \"USD{2}\" ]];then echo \"Give me the Route53 zone ID\" exit 1 fi ZONE_ID=USD{2} ROUTES=10 timeout=40 count=0 # This allows us to remove the ownership in the AWS for the API route oc delete route -n USD{1} --all while [ USD{ROUTES} -gt 2 ] do echo \"Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: USD{ZONE_ID}...\" echo \"Try: (USD{count}/USD{timeout})\" sleep 10 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for cleaning the Route53 DNS records\" exit 1 fi count=USD((count+1)) ROUTES=USD(aws route53 list-resource-record-sets --hosted-zone-id USD{ZONE_ID} --max-items 10000 --output json | grep -c USD{EXTERNAL_DNS_DOMAIN}) done } SAMPLE: clean_routes \"<HC ControlPlane Namespace>\" \"<AWS_ZONE_ID>\" clean_routes \"USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}\" \"USD{AWS_ZONE_ID}\"",
"Just in case export KUBECONFIG=USD{MGMT2_KUBECONFIG} BACKUP_DIR=USD{HC_CLUSTER_DIR}/backup Namespace deletion in the destination Management cluster oc delete ns USD{HC_CLUSTER_NS} || true oc delete ns USD{HC_CLUSTER_NS}-{HC_CLUSTER_NAME} || true",
"Namespace creation oc new-project USD{HC_CLUSTER_NS} oc new-project USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/secret-*",
"Secrets oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/secret-* Cluster oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/hcp-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/cl-*",
"AWS oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awscl-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsmt-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/awsm-* Machines oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machinedeployment-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machineset-* oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME}/machine-*",
"ETCD_PODS=\"etcd-0\" if [ \"USD{CONTROL_PLANE_AVAILABILITY_POLICY}\" = \"HighlyAvailable\" ]; then ETCD_PODS=\"etcd-0 etcd-1 etcd-2\" fi HC_RESTORE_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-restore.yaml HC_BACKUP_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}.yaml HC_NEW_FILE=USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/hc-USD{HC_CLUSTER_NAME}-new.yaml cat USD{HC_BACKUP_FILE} > USD{HC_NEW_FILE} cat > USD{HC_RESTORE_FILE} <<EOF restoreSnapshotURL: EOF for POD in USD{ETCD_PODS}; do # Create a pre-signed URL for the etcd snapshot ETCD_SNAPSHOT=\"s3://USD{BUCKET_NAME}/USD{HC_CLUSTER_NAME}-USD{POD}-snapshot.db\" ETCD_SNAPSHOT_URL=USD(AWS_DEFAULT_REGION=USD{MGMT2_REGION} aws s3 presign USD{ETCD_SNAPSHOT}) # FIXME no CLI support for restoreSnapshotURL yet cat >> USD{HC_RESTORE_FILE} <<EOF - \"USD{ETCD_SNAPSHOT_URL}\" EOF done cat USD{HC_RESTORE_FILE} if ! grep USD{HC_CLUSTER_NAME}-snapshot.db USD{HC_NEW_FILE}; then sed -i '' -e \"/type: PersistentVolume/r USD{HC_RESTORE_FILE}\" USD{HC_NEW_FILE} sed -i '' -e '/pausedUntil:/d' USD{HC_NEW_FILE} fi HC=USD(oc get hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} -o name || true) if [[ USD{HC} == \"\" ]];then echo \"Deploying HC Cluster: USD{HC_CLUSTER_NAME} in USD{HC_CLUSTER_NS} namespace\" oc apply -f USD{HC_NEW_FILE} else echo \"HC Cluster USD{HC_CLUSTER_NAME} already exists, avoiding step\" fi",
"oc apply -f USD{BACKUP_DIR}/namespaces/USD{HC_CLUSTER_NS}/np-*",
"timeout=40 count=0 NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 while [ USD{NODE_POOL_REPLICAS} != USD{NODE_STATUS} ] do echo \"Waiting for Nodes to be Ready in the destination MGMT Cluster: USD{MGMT2_CLUSTER_NAME}\" echo \"Try: (USD{count}/USD{timeout})\" sleep 30 if [[ USDcount -eq timeout ]];then echo \"Timeout waiting for Nodes in the destination MGMT Cluster\" exit 1 fi count=USD((count+1)) NODE_STATUS=USD(oc get nodes --kubeconfig=USD{HC_KUBECONFIG} | grep -v NotReady | grep -c \"worker\") || NODE_STATUS=0 done",
"Just in case export KUBECONFIG=USD{MGMT_KUBECONFIG} Scale down deployments oc scale deployment -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all oc scale statefulset.apps -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --replicas=0 --all sleep 15",
"NODEPOOLS=USD(oc get nodepools -n USD{HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName==\"'USD{HC_CLUSTER_NAME}'\")].metadata.name}') if [[ ! -z \"USD{NODEPOOLS}\" ]];then oc patch -n \"USD{HC_CLUSTER_NS}\" nodepool USD{NODEPOOLS} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete np -n USD{HC_CLUSTER_NS} USD{NODEPOOLS} fi",
"Machines for m in USD(oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name); do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done oc delete machineset -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all || true",
"Cluster C_NAME=USD(oc get cluster -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{C_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete cluster.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all",
"AWS Machines for m in USD(oc get awsmachine.infrastructure.cluster.x-k8s.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} -o name) do oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' || true oc delete -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} USD{m} || true done",
"Delete HCP and ControlPlane HC NS oc patch -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io USD{HC_CLUSTER_NAME} --type=json --patch='[ { \"op\":\"remove\", \"path\": \"/metadata/finalizers\" }]' oc delete hostedcontrolplane.hypershift.openshift.io -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} --all oc delete ns USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} || true",
"Delete HC and HC Namespace oc -n USD{HC_CLUSTER_NS} patch hostedclusters USD{HC_CLUSTER_NAME} -p '{\"metadata\":{\"finalizers\":null}}' --type merge || true oc delete hc -n USD{HC_CLUSTER_NS} USD{HC_CLUSTER_NAME} || true oc delete ns USD{HC_CLUSTER_NS} || true",
"Validations export KUBECONFIG=USD{MGMT2_KUBECONFIG} oc get hc -n USD{HC_CLUSTER_NS} oc get np -n USD{HC_CLUSTER_NS} oc get pod -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} oc get machines -n USD{HC_CLUSTER_NS}-USD{HC_CLUSTER_NAME} Inside the HostedCluster export KUBECONFIG=USD{HC_KUBECONFIG} oc get clusterversion oc get nodes",
"oc delete pod -n openshift-ovn-kubernetes --all"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/hosted_control_planes/high-availability-for-hosted-control-planes |
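The backup and migration commands above assume a set of environment variables that must be exported before any of them are run (in the extracted strings the shell's dollar sign is printed as USD). A minimal setup sketch, written with standard $ syntax; every value below is a placeholder, and the NODEPOOLS lookup mirrors the query used later in the procedure:

# Illustrative only: export the variables that the backup and restore commands expect.
# All values are placeholders; substitute your own cluster and AWS details.
export HC_CLUSTER_NAME="hc-example"                             # hosted cluster name
export HC_CLUSTER_NS="clusters"                                 # namespace holding the HostedCluster
export HC_CLUSTER_DIR="/tmp/${HC_CLUSTER_NAME}"
export BACKUP_DIR="${HC_CLUSTER_DIR}/backup"
export MGMT_KUBECONFIG="/path/to/source-mgmt.kubeconfig"        # source management cluster
export MGMT2_KUBECONFIG="/path/to/destination-mgmt.kubeconfig"  # destination management cluster
export HC_KUBECONFIG="/path/to/hosted-cluster.kubeconfig"
export BUCKET_NAME="my-etcd-backup-bucket"                      # S3 bucket holding the etcd snapshots
export AWS_ZONE_ID="Z0123456789EXAMPLE"                         # Route53 zone used by ExternalDNS
export NODEPOOLS=$(oc get nodepools -n ${HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName=="'${HC_CLUSTER_NAME}'")].metadata.name}')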
Managing hosts | Managing hosts Red Hat Satellite 6.16 Register hosts to Satellite, configure host groups and collections, set up remote execution, manage packages on hosts, monitor hosts, and more Red Hat Satellite Documentation Team [email protected] | [
"hammer host create --ask-root-password yes --hostgroup \" My_Host_Group \" --interface=\"primary=true, provision=true, mac= My_MAC_Address , ip= My_IP_Address \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \"",
"subscription-manager syspurpose set usage ' Production ' subscription-manager syspurpose set role ' Red Hat Enterprise Linux Server ' subscription-manager syspurpose add addons ' your_addon '",
"subscription-manager syspurpose",
"hammer host delete --id My_Host_ID --location-id My_Location_ID --organization-id My_Organization_ID",
"mkdir /etc/puppetlabs/code/environments/ example_environment",
"hammer hostgroup create --name \"Base\" --architecture \"My_Architecture\" --content-source-id _My_Content_Source_ID_ --content-view \"_My_Content_View_\" --domain \"_My_Domain_\" --lifecycle-environment \"_My_Lifecycle_Environment_\" --locations \"_My_Location_\" --medium-id _My_Installation_Medium_ID_ --operatingsystem \"_My_Operating_System_\" --organizations \"_My_Organization_\" --partition-table \"_My_Partition_Table_\" --puppet-ca-proxy-id _My_Puppet_CA_Proxy_ID_ --puppet-environment \"_My_Puppet_Environment_\" --puppet-proxy-id _My_Puppet_Proxy_ID_ --root-pass \"My_Password\" --subnet \"_My_Subnet_\"",
"MAJOR=\" My_Major_Operating_System_Version \" ARCH=\" My_Architecture \" ORG=\" My_Organization \" LOCATIONS=\" My_Location \" PTABLE_NAME=\" My_Partition_Table \" DOMAIN=\" My_Domain \" hammer --output csv --no-headers lifecycle-environment list --organization \"USD{ORG}\" | cut -d ',' -f 2 | while read LC_ENV; do [[ USD{LC_ENV} == \"Library\" ]] && continue hammer hostgroup create --name \"rhel-USD{MAJOR}server-USD{ARCH}-USD{LC_ENV}\" --architecture \"USD{ARCH}\" --partition-table \"USD{PTABLE_NAME}\" --domain \"USD{DOMAIN}\" --organizations \"USD{ORG}\" --query-organization \"USD{ORG}\" --locations \"USD{LOCATIONS}\" --lifecycle-environment \"USD{LC_ENV}\" done",
"systemctl enable --now chronyd",
"chkconfig --add ntpd chkconfig ntpd on service ntpd start",
"cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors",
"update-ca-trust",
"mkdir /etc/puppetlabs/code/environments/ example_environment",
"curl -O http:// satellite.example.com /pub/bootstrap.py",
"chmod +x bootstrap.py",
"/usr/libexec/platform-python bootstrap.py -h",
"./bootstrap.py -h",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"",
"./bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \"",
"rm bootstrap.py",
"ROLE='Bootstrap' hammer role create --name \"USDROLE\" hammer filter create --role \"USDROLE\" --permissions view_organizations hammer filter create --role \"USDROLE\" --permissions view_locations hammer filter create --role \"USDROLE\" --permissions view_domains hammer filter create --role \"USDROLE\" --permissions view_hostgroups hammer filter create --role \"USDROLE\" --permissions view_hosts hammer filter create --role \"USDROLE\" --permissions view_architectures hammer filter create --role \"USDROLE\" --permissions view_ptables hammer filter create --role \"USDROLE\" --permissions view_operatingsystems hammer filter create --role \"USDROLE\" --permissions create_hosts",
"hammer user add-role --id user_id --role Bootstrap",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --force",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --legacy-purge --legacy-login rhn-user",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --skip-puppet",
"/usr/libexec/platform-python bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman",
"bootstrap.py --server satellite.example.com --organization=\" My_Organization \" --activationkey=\" My_Activation_Key \" --skip-foreman",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --download-method https",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --ip 192.x.x.x",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --rex --rex-user root",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --add-domain",
"hammer settings set --name create_new_host_when_facts_are_uploaded --value false hammer settings set --name create_new_host_when_report_is_uploaded --value false",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com",
"bootstrap.py --login= admin --server satellite.example.com --location=\" My_Location \" --organization=\" My_Organization \" --hostgroup=\" My_Host_Group \" --activationkey=\" My_Activation_Key \" --fqdn node100.example.com",
"yum install katello-host-tools-tracer",
"katello-tracer-upload",
"dnf install puppet-agent",
"yum install puppet-agent",
". /etc/profile.d/puppet-agent.sh",
"puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent",
"puppet resource service puppet ensure=running enable=true",
"puppet ssl bootstrap",
"puppet ssl bootstrap",
"dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"hammer host create --ask-root-password yes --hostgroup My_Host_Group --ip= My_IP_Address --mac= My_MAC_Address --managed true --interface=\"identifier= My_NIC_1, mac=_My_MAC_Address_1 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= My_NIC_2 , mac= My_MAC_Address_2 , managed=true, type=Nic::Managed, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --interface=\"identifier= bondN , ip= My_IP_Address_2 , type=Nic::Bond, mode=active-backup, attached_devices=[ My_NIC_1 , My_NIC_2 ], managed=true, domain_id= My_Domain_ID , subnet_id= My_Subnet_ID \" --location \" My_Location \" --name \" My_Host_Name \" --organization \" My_Organization \" --subnet-id= My_Subnet_ID",
"satellite-installer --foreman-proxy-bmc-default-provider=ipmitool --foreman-proxy-bmc=true",
"https:// satellite.example.com /unattended/public/foreman_ca_refresh",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features",
"https:// satellite.example.com /unattended/public/foreman_ca_refresh",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features",
"curl -o \"satellite_ca_cert.crt\" https:// satellite.example.com /unattended/public/foreman_raw_ca",
"cp -u satellite_ca_cert.crt /etc/rhsm/ca/katello-server-ca.pem",
"cp satellite_ca_cert.crt /etc/pki/ca-trust/source/anchors",
"update-ca-trust",
"curl --head https:// satellite.example.com",
"curl --head https:// capsule.example.com:9090 /features",
"satellite-installer --enable-foreman-plugin-remote-execution-cockpit --reset-foreman-plugin-remote-execution-cockpit-ensure",
"satellite-installer --foreman-plugin-remote-execution-cockpit-ensure absent",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-cockpit-integration false",
"hammer report-template list",
"hammer report-template generate --id My_Template_ID",
"hammer report-template generate --inputs \"Days from Now=no limit\" --name \"Subscription - General Report\"",
"hammer report-template generate --inputs \"Days from Now=60\" --name \"Subscription - General Report\"",
"hammer report-template list",
"hammer report-template dump --id My_Template_ID > example_export .erb",
"curl --insecure --user My_User_Name : My_Password --request GET --config https:// satellite.example.com /api/report_templates | json_reformat",
"{ \"total\": 6, \"subtotal\": 6, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applicable errata\", \"id\": 112 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Applied Errata\", \"id\": 113 }, { \"created_at\": \"2019-11-30 16:15:24 UTC\", \"updated_at\": \"2019-11-30 16:15:24 UTC\", \"name\": \"Hosts - complete list\", \"id\": 158 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Host statuses\", \"id\": 114 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Registered hosts\", \"id\": 115 }, { \"created_at\": \"2019-11-20 17:49:52 UTC\", \"updated_at\": \"2019-11-20 17:49:52 UTC\", \"name\": \"Subscriptions\", \"id\": 116 } ] }",
"curl --insecure --output /tmp/_Example_Export_Template .erb_ --user admin:password --request GET --config https:// satellite.example.com /api/report_templates/ My_Template_ID /export",
"cat Example_Template .json { \"name\": \" Example Template Name \", \"template\": \" Enter ERB Code Here \" }",
"{ \"name\": \"Hosts - complete list\", \"template\": \" <%# name: Hosts - complete list snippet: false template_inputs: - name: host required: false input_type: user advanced: false value_type: plain resource_type: Katello::ActivationKey model: ReportTemplate -%> <% load_hosts(search: input('host')).each_record do |host| -%> <% report_row( 'Server FQDN': host.name ) -%> <% end -%> <%= report_render %> \" }",
"curl --insecure --user My_User_Name : My_Password --data @ Example_Template .json --header \"Content-Type:application/json\" --request POST --config https:// satellite.example.com /api/report_templates/import",
"curl --insecure --user My_User_Name : My_Password --request GET --config https:// satellite.example.com /api/report_templates | json_reformat",
"hammer host-collection create --name \" My_Host_Collection \" --organization \" My_Organization \"",
"hammer host-collection add-host --host-ids My_Host_ID_1 --id My_Host_Collection_ID",
"hammer host-collection add-host --host-ids My_Host_ID_1 , My_Host_ID_2 --id My_Host_Collection_ID",
"subscription-manager refresh",
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh",
"dnf install katello-pull-transport-migrate",
"yum install katello-pull-transport-migrate",
"systemctl status yggdrasild",
"hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH",
"curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_Capsule_ID",
"curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID",
"curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_Capsule_ID",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir /My_Remote_Working_Directory",
"chcon --reference=/var/tmp /My_Remote_Working_Directory",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-remote-working-dir /My_Remote_Working_Directory",
"mkdir /My_Remote_Working_Directory",
"systemctl edit yggdrasild",
"Environment=FOREMAN_YGG_WORKER_WORKDIR= /My_Remote_Working_Directory",
"systemctl restart yggdrasild",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"",
"cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true",
"hostgroup_fullname ~ \" My_Host_Group *\"",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id My_Template_ID",
"hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200",
"global_status = ok",
"global_status = error or global_status = warning",
"status.pending > 0",
"status.restarted > 0",
"status.interesting = true",
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_install\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }",
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_update\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }",
"curl https:// satellite.example.com /api/job_invocations -H \"content-type: application/json\" -X POST -d @ Path_To_My_API_Request_Body -u My_Username : My_Password | python3 -m json.tool",
"{ \"job_invocation\" : { \"concurrency_control\" : { \"concurrency_level\" : 100 }, \"feature\" : \"katello_package_remove\", \"inputs\" : { \"package\" : \"nano vim\" }, \"scheduling\" : { \"start_at\" : \"2023-09-21T19:00:00+00:00\", \"start_before\" : \"2023-09-23T00:00:00+00:00\" }, \"search_query\" : \"*\", \"ssh\" : { \"effective_user\" : \"My_Username\", \"effective_user_password\" : \"My_Password\" }, \"targeting_type\" : \"dynamic_query\" } }",
"rpm --query yggdrasil",
"systemctl status yggdrasil com.redhat.Yggdrasil1.Worker1.foreman",
"dnf install foreman_ygg_migration",
"systemctl status yggdrasil com.redhat.Yggdrasil1.Worker1.foreman",
"<% if @host.operatingsystem.family == \"Redhat\" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input(\"action\") %> <%= input(\"service\") %> <% else -%> service <%= input(\"service\") %> <%= input(\"action\") %> <% end -%>",
"echo <%= @host.name %>",
"host.example.com",
"<% server_name = @host.fqdn %> <%= server_name %>",
"host.example.com",
"<%= @ example_incorrect_variable .fqdn -%>",
"undefined method `fqdn' for nil:NilClass",
"<%= \"line1\" %> <%= \"line2\" %>",
"line1 line2",
"<%= \"line1\" -%> <%= \"line2\" %>",
"line1line2",
"<%= @host.fqdn -%> <%= @host.ip -%>",
"host.example.com10.10.181.216",
"<%# A comment %>",
"<%- load_hosts.each do |host| -%> <%- if host.build? %> <%= host.name %> build is in progress <%- end %> <%- end %>",
"<%= input('cpus') %>",
"<%- load_hosts().each_record do |host| -%> <%= host.name %>",
"<% load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%= host.name %> <% end -%>",
"<%- load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name ) -%> <%- end -%> <%= report_render -%>",
"Server FQDN host1.example.com host2.example.com host3.example.com host4.example.com host5.example.com host6.example.com",
"<%- load_hosts(search: input('host')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%> <%- end -%> <%= report_render -%>",
"Server FQDN,IP host1.example.com , 10.8.30.228 host2.example.com , 10.8.30.227 host3.example.com , 10.8.30.226 host4.example.com , 10.8.30.225 host5.example.com , 10.8.30.224 host6.example.com , 10.8.30.223",
"<%= report_render -%>",
"truthy?(\"true\") => true truthy?(1) => true truthy?(\"false\") => false truthy?(0) => false",
"falsy?(\"true\") => false falsy?(1) => false falsy?(\"false\") => true falsy?(0) => true",
"<% @host.ip.split('.').last %>",
"<% load_hosts().each_record do |host| -%> <% if @host.name == \" host1.example.com \" -%> <% result=\"positive\" -%> <% else -%> <% result=\"negative\" -%> <% end -%> <%= result -%>",
"host1.example.com positive",
"<%= @host.interfaces -%>",
"<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>",
"[] each find_in_batches first map size to_a",
"alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid",
"<% load_hosts().each_record do |host| -%> <% host.interfaces.each do |iface| -%> iface.alias?: <%= iface.alias? %> iface.attached_to: <%= iface.attached_to %> iface.bond_options: <%= iface.bond_options %> iface.children_mac_addresses: <%= iface.children_mac_addresses %> iface.domain: <%= iface.domain %> iface.fqdn: <%= iface.fqdn %> iface.identifier: <%= iface.identifier %> iface.inheriting_mac: <%= iface.inheriting_mac %> iface.ip: <%= iface.ip %> iface.ip6: <%= iface.ip6 %> iface.link: <%= iface.link %> iface.mac: <%= iface.mac %> iface.managed?: <%= iface.managed? %> iface.mode: <%= iface.mode %> iface.mtu: <%= iface.mtu %> iface.physical?: <%= iface.physical? %> iface.primary: <%= iface.primary %> iface.provision: <%= iface.provision %> iface.shortname: <%= iface.shortname %> iface.subnet: <%= iface.subnet %> iface.subnet6: <%= iface.subnet6 %> iface.tag: <%= iface.tag %> iface.virtual?: <%= iface.virtual? %> iface.vlanid: <%= iface.vlanid %> <%- end -%>",
"host1.example.com iface.alias?: false iface.attached_to: iface.bond_options: iface.children_mac_addresses: [] iface.domain: iface.fqdn: host1.example.com iface.identifier: ens192 iface.inheriting_mac: 00:50:56:8d:4c:cf iface.ip: 10.10.181.13 iface.ip6: iface.link: true iface.mac: 00:50:56:8d:4c:cf iface.managed?: true iface.mode: balance-rr iface.mtu: iface.physical?: true iface.primary: true iface.provision: true iface.shortname: host1.example.com iface.subnet: iface.subnet6: iface.tag: iface.virtual?: false iface.vlanid:",
"<% pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = pm_set || host_param_true?('force-puppet') puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo') %>",
"<% os_major = @host.operatingsystem.major.to_i os_minor = @host.operatingsystem.minor.to_i %> <% if ((os_minor < 2) && (os_major < 14)) -%> <% end -%>",
"<%= indent 4 do snippet 'subscription_manager_registration' end %>",
"<% subnet = @host.subnet %> <% if subnet.respond_to?(:dhcp_boot_mode?) -%> <%= snippet 'kickstart_networking_setup' %> <% end -%>",
"'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],",
"<%- report_row( 'Host': host.name, 'Operating System': host.operatingsystem, 'Kernel': host.facts['uname::release'], 'Environment': host.single_lifecycle_environment ? host.single_lifecycle_environment.name : nil, 'Erratum': erratum.errata_id, 'Type': erratum.errata_type, 'Published': erratum.issued, 'Applicable since': erratum.created_at, 'Severity': erratum.severity, 'Packages': erratum.package_names, 'CVEs': erratum.cves, 'Reboot suggested': erratum.reboot_suggested, ) -%>",
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>",
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>",
"restorecon -RvF <%= input(\"directory\") %>",
"<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>",
"<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/managing_hosts/index |
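The registration workflow above is shown one command at a time. In practice the bootstrap.py steps are run back to back on the host being registered; a minimal sketch using the same placeholder organization, location, host group, and activation key as the commands above, assuming a host where /usr/libexec/platform-python is available (otherwise run ./bootstrap.py directly, as shown above):

# Illustrative only: download, run, and remove bootstrap.py to register this host.
curl -O http://satellite.example.com/pub/bootstrap.py
chmod +x bootstrap.py
/usr/libexec/platform-python bootstrap.py \
  --login=admin \
  --server satellite.example.com \
  --location="My_Location" \
  --organization="My_Organization" \
  --hostgroup="My_Host_Group" \
  --activationkey="My_Activation_Key"
rm bootstrap.py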
Chapter 3. Event templates | Chapter 3. Event templates Cryostat includes default event templates that you can use to quickly create a JFR recording for monitoring your target JVM's performance. 3.1. Using custom event templates You can choose either one of the following default event templates when creating a JDK Flight Recorder (JFR) recording: Continuous template, which collects basic target Java Virtual Machine (JVM) data for either a fixed duration or until it is explicitly stopped. Profiling template, which collects in-depth target JVM data for either a fixed duration or until it is explicitly stopped. By using either of these default event templates, you can quickly create a JFR recording for monitoring your target JVM's performance. You can edit either event template at a later stage to suit your needs. For example, the default event templates do not contain application-specific custom events, so you must add these custom events to the custom template. Cryostat also supports the ALL meta-template, which enables a JFR to monitor all event types for a target JVM. Default values exist for each event type. The ALL meta-template does not contain an XML definition, so you cannot download an XML file for the ALL meta-template. Prerequisites Installed Cryostat 3.0 on Red Hat OpenShift by using the Installed Operators option. Created a Cryostat instance in your Red Hat OpenShift project. Procedure On the Dashboard panel for your Cryostat instance, select a Target JVM from the drop-down list. Optional: On the Topology panel, you can define a target JVM by selecting the Add to view icon. After you select the icon, a window opens for defining a custom target connection URL. In the Connection URL field, enter the URL for your JVM's Java Management Extension (JMX) endpoint. Optional: In the Alias field, enter an alias for your JMX Service URL. Click Create . Figure 3.1. Create Target dialog box From the navigation menu on the Cryostat web console, click Events . An Authentication Required dialog might open on your web console. If prompted, enter your Username and Password in the Authentication Required dialog box, and click Save to provide your credentials to the target JVM. Note If the selected target JMX has SSL certification enabled for JMX connections, you must add its certificate when prompted. Cryostat can encrypt and store credentials for a target JVM application in a database that is stored on a persistent volume claim (PVC) on Red Hat OpenShift. Under the Event Templates tab, locate your listed event template and then select its more options menu. From the more options menu, click Download . Depending on how you configured your operating system, a file-save dialog opens. Save the file to your preferred location. Figure 3.2. Example of an event template's more options menu Open the file with your default file editor and edit the file to meet your needs. You must save your file to retain your configuration changes. Note You can add values to the description and provider attributes that can help with identifying your file at a later stage. From the Events menu, go to the Event Templates tab then click the Upload icon. A Create Custom Event Template window opens in your Cryostat web console. Figure 3.3. Create Custom Event Template window Click Upload and use your default file editor to upload one or more configured event template files to the Cryostat web console. You can also drag and drop the files into the Template XML window. Click the Submit button. 
The Event Templates tab opens on your Cryostat web console, where you can now view your custom event template. Optional: After you create your event template, you can choose one of the following options for using your template to create a JFR recording: From the Automated Rules menu, click Create and then select an event template from the Template list. From the Events menu, locate your listed event template, then from the more options menu, select Create Recording . From the Recordings menu, under the Active Recordings tab, click Create . Additional resources See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat) See Uploading an SSL certificate (Using Cryostat to manage a JFR recording) See Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording) See Enabling or disabling automated rules (Using automated rules on Cryostat) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_cryostat_to_manage_a_jfr_recording/assembly_event-templates_assembly_archive-jfr-recordings |
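A note on the editing step above: the downloaded template is an XML file, so the description and provider attributes that the procedure mentions can also be adjusted non-interactively before upload. A rough sketch, assuming the template was saved as Profiling.jfc; the file name and the sed patterns are illustrative, not part of the original procedure:

# Illustrative only: tag a downloaded event template before uploading it again.
sed -i 's/description="[^"]*"/description="Team profiling template for my-app"/' Profiling.jfc
sed -i 's/provider="[^"]*"/provider="my-team"/' Profiling.jfc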
Chapter 3. Supported configurations | Chapter 3. Supported configurations 3.1. Supported Migration Toolkit for Runtimes migration paths The Migration Toolkit for Runtimes (MTR) supports the following migrations: Migrating from third-party enterprise application servers, such as Oracle WebLogic Server, to JBoss Enterprise Application Platform (JBoss EAP). Upgrading to the latest release of JBoss EAP. Migrating from a Windows-only .NET 4.5+ Framework to cross-platform .NET 8.0. (Developer Preview) MTR provides a comprehensive set of rules to assess the suitability of your applications for containerization and deployment on Red Hat OpenShift Container Platform (RHOCP). You can run an MTR analysis to assess your applications' suitability for migration to multiple target platforms. Table 3.1. Supported Java migration paths: Source platform ⇒ Target platform Source platform ⇒ Migration to JBoss EAP 7 & 8 OpenShift (cloud readiness) OpenJDK 11, 17, and 21 Jakarta EE 9 Camel 3 & 4 Spring Boot in Red Hat Runtimes Quarkus Open Liberty Oracle WebLogic Server ✔ ✔ ✔ - - - - - IBM WebSphere Application Server ✔ ✔ ✔ - - - - ✔ JBoss EAP 4 ✘ [a] ✔ ✔ - - - - - JBoss EAP 5 ✔ ✔ ✔ - - - - - JBoss EAP 6 ✔ ✔ ✔ - - - - - JBoss EAP 7 ✔ ✔ ✔ - - - ✔ - Thorntail ✔ [b] - - - - - - - Oracle JDK - ✔ ✔ - - - - - Camel 2 - ✔ ✔ - ✔ - - - Spring Boot - ✔ ✔ ✔ - ✔ ✔ - Any Java application - ✔ ✔ - - - - - Any Java EE application - - - ✔ - - - - [a] Although MTR does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7. [b] Requires JBoss Enterprise Application Platform expansion pack 2 (EAP XP 2) .NET migration paths: Source platform ⇒ Target platform (Developer Preview) Source platform ⇒ OpenShift (cloud readiness) Migration to .NET 8.0 .NET Framework 4.5+ (Windows only) ✔ ✔ | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/introduction_to_the_migration_toolkit_for_runtimes/supported_configurations |
Red Hat Ansible Automation Platform hardening guide | Red Hat Ansible Automation Platform hardening guide Red Hat Ansible Automation Platform 2.4 Install, configure, and maintain Ansible Automation Platform running on Red Hat Enterprise Linux in a secure manner. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_hardening_guide/index |
function::target | function::target Name function::target - Return the process ID of the target process. Synopsis Arguments None General Syntax target: long Description This function returns the process ID of the target process. This is useful in conjunction with the -x PID or -c CMD command-line options to stap. An example of its use is to create scripts that filter on a specific process. | [
"function target:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-target |
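The description mentions combining target() with the -x PID or -c CMD options to filter on one process but gives no example. A minimal sketch; the probe point and output format are illustrative:

# Illustrative only: report open() calls made by the target process alone.
stap -x 1234 -e 'probe syscall.open { if (pid() == target()) printf("%s opened %s\n", execname(), filename) }'
# The same filter while launching a command under stap:
stap -c "ls /etc" -e 'probe syscall.open { if (pid() == target()) printf("%s opened %s\n", execname(), filename) }'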
Chapter 3. Authentication and authorization for hosted control planes | Chapter 3. Authentication and authorization for hosted control planes The OpenShift Container Platform control plane includes a built-in OAuth server. You can obtain OAuth access tokens to authenticate to the OpenShift Container Platform API. After you create your hosted cluster, you can configure OAuth by specifying an identity provider. 3.1. Configuring the OAuth server for a hosted cluster by using the CLI You can configure the internal OAuth server for your hosted cluster by using an OpenID Connect identity provider ( oidc ). You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. Traffic for DNS resolution is sent through the worker nodes. You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You created your hosted cluster. Procedure Edit the HostedCluster custom resource (CR) on the hosting cluster by running the following command: USD oc edit <hosted_cluster_name> -n <hosted_cluster_namespace> Add the OAuth configuration in the HostedCluster CR by using the following example: apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID 1 Specifies your hosted cluster name. 2 Specifies your hosted cluster namespace. 3 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 4 Defines a list of attributes to use as the email address. 5 Defines a list of attributes to use as a display name. 6 Defines a list of attributes to use as a preferred user name. 7 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 8 Defines a secret of a client registered with the OpenID provider. 9 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 10 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Save the file to apply the changes. 3.2. Configuring the OAuth server for a hosted cluster by using the web console You can configure the internal OAuth server for your hosted cluster by using the OpenShift Container Platform web console. You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. 
Traffic for DNS resolution is sent through the worker nodes. You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You logged in as a user with cluster-admin privileges. You created your hosted cluster. Procedure Navigate to Home API Explorer . Use the Filter by kind box to search for your HostedCluster resource. Click the HostedCluster resource that you want to edit. Click the Instances tab. Click the Options menu to your hosted cluster name entry and click Edit HostedCluster . Add the OAuth configuration in the YAML file: spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID 1 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 2 Defines a list of attributes to use as the email address. 3 Defines a list of attributes to use as a display name. 4 Defines a list of attributes to use as a preferred user name. 5 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 6 Defines a secret of a client registered with the OpenID provider. 7 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 8 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Click Save . Additional resources To know more about supported identity providers, see "Understanding identity provider configuration" in Authentication and authorization . | [
"oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID",
"spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/hosted_control_planes/authentication-and-authorization-for-hosted-control-planes |
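The CLI procedure above edits the HostedCluster resource interactively. For scripted changes, the same OAuth stanza can be supplied as a merge patch; this is a sketch rather than a documented step, and every bracketed value is a placeholder taken from the example YAML above:

# Illustrative only: add an OpenID identity provider to a hosted cluster non-interactively.
oc patch hostedcluster <hosted_cluster_name> -n <hosted_cluster_namespace> --type merge \
  -p '{"spec":{"configuration":{"oauth":{"identityProviders":[{"name":"IAM","type":"OpenID","mappingMethod":"lookup","openID":{"clientID":"<client_id>","clientSecret":{"name":"<client_id_secret_name>"},"issuer":"https://example.com/identity","claims":{"email":["<email_address>"],"name":["<display_name>"],"preferredUsername":["<preferred_username>"]}}}]}}}}'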
7.10. Creating a Virtual Machine Based on a Template | 7.10. Creating a Virtual Machine Based on a Template Create a virtual machine from a template to enable the virtual machines to be pre-configured with an operating system, network interfaces, applications and other resources. Note Virtual machines created from a template depend on that template. This means that you cannot remove a template from the Manager if a virtual machine was created from that template. However, you can clone a virtual machine from a template to remove the dependency on that template. See Section 7.11, "Creating a Cloned Virtual Machine Based on a Template" for more information. Creating a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Template list. Enter a Name , Description , and any Comments , and accept the default values inherited from the template in the rest of the fields. You can change them if needed. Click the Resource Allocation tab. Select the Thin or Clone radio button in the Storage Allocation area. If you select Thin , the disk format is QCOW2. If you select Clone , select either QCOW2 or Raw for disk format. Use the Target drop-down list to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . The virtual machine is displayed in the Virtual Machines tab. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/creating_a_virtual_machine_based_on_a_template |
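The procedure above is performed in the Administration Portal. For orientation only, here is a rough sketch of the equivalent request through the RHV REST API; the endpoint, XML payload shape, and credentials are assumptions based on the v4 API and are not part of this section:

# Illustrative only: create a virtual machine from a template over the REST API.
# Hostname, credentials, and object names are placeholders.
curl --insecure --user admin@internal:password \
  --header 'Content-Type: application/xml' \
  --request POST \
  --data '<vm><name>my_vm</name><cluster><name>My_Cluster</name></cluster><template><name>My_Template</name></template></vm>' \
  https://manager.example.com/ovirt-engine/api/vms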
Chapter 2. Prerequisites | Chapter 2. Prerequisites On RHEL Atomic Host , atomic is part of the OSTree and is ready to use. On Red Hat Enterprise Linux , make sure you have covered the following: Subscribe the system to the Extras channel which provides the atomic package. For Red Hat Subscription Management run this command: If you are using Satellite, run: Install atomic using Yum: Make sure the docker service is running: If the output states "inactive", use the following command: Note On both systems, you need to have root privileges to use atomic . | [
"subscription-manager repos --enable rhel-7-server-extras-rpms",
"rhn-channel --add --channel rhel-x86_64-server-extras-7",
"yum install atomic",
"systemctl status docker",
"systemctl start docker"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/cli_reference/prerequisites |
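On Red Hat Enterprise Linux, the preparation steps above are usually run as one short root session. A minimal sketch for a system registered with Red Hat Subscription Management; the repository and package names come from the commands above, and the final check is illustrative:

# Illustrative only: enable the Extras repository, install atomic, and make sure docker runs.
subscription-manager repos --enable rhel-7-server-extras-rpms
yum install -y atomic
systemctl status docker || systemctl start docker
atomic --help    # confirm the CLI is available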
Chapter 4. Configuring and setting up remote jobs | Chapter 4. Configuring and setting up remote jobs Red Hat Satellite supports remote execution of commands on hosts. Using remote execution, you can perform various tasks on multiple hosts simultaneously. 4.1. Remote execution in Red Hat Satellite With remote execution, you can run jobs on hosts from Capsules by using shell scripts or Ansible roles and playbooks. Use remote execution for the following benefits in Satellite: Run jobs on multiple hosts at once. Use variables in your commands for more granular control over the jobs you run. Use host facts and parameters to populate the variable values. Specify custom values for templates when you run the command. Communication for remote execution occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host, and can scale to manage many hosts. For more information, see Section 4.4, "Transport modes for remote execution" . To use remote execution, you must define a job template. A job template is a command that you want to apply to remote hosts. You can execute a job template multiple times. Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in Managing hosts . By default, Satellite includes several job templates for shell scripts and Ansible. For more information, see Setting up Job Templates in Managing hosts . Additional resources See Executing a Remote Job in Managing hosts . 4.2. Remote execution workflow For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on your Capsule Server. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed. When you run a remote job on hosts, for every host, Satellite performs the following actions to find a remote execution Capsule to use. Satellite searches only for Capsules that have the Ansible feature enabled. Satellite finds the host's interfaces that have the Remote execution checkbox selected. Satellite finds the subnets of these interfaces. Satellite finds remote execution Capsules assigned to these subnets. From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the jobs load is balanced between remote execution Capsules. If you have enabled Prefer registered through Capsule for remote execution , Satellite runs the REX job by using the Capsule to which the host is registered. By default, Prefer registered through Capsule for remote execution is set to No . To enable it, in the Satellite web UI, navigate to Administer > Settings , and on the Content tab, set Prefer registered through Capsule for remote execution to Yes . This ensures that Satellite performs REX jobs on hosts by the Capsule to which they are registered to. If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. 
Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. 4.3. Permissions for remote execution You can control which roles can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles: Remote Execution Manager : Can access all remote execution features and functionality. Remote Execution User : Can only run jobs. You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission on a customized role, you can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role. The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. You can run remote execution jobs against Red Hat Satellite and Capsule registered as hosts to Red Hat Satellite with the execute_jobs_on_infrastructure_hosts permission. Standard Manager and Site Manager roles have this permission by default. If you use either the Manager or Site Manager role, or if you use a custom role with the execute_jobs_on_infrastructure_hosts permission, you can execute remote jobs against registered Red Hat Satellite and Capsule hosts. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite . The following example shows filters for the execute_template_invocation permission: Use the first line in this example to apply the Reboot template to one selected host. Use the second line to define a pool of hosts with names ending with .staging.example.com . Use the third line to bind the template with a host group. Note Permissions assigned to users with these roles can change over time. If you have already scheduled some jobs to run in the future, and the permissions change, this can result in execution failure because permissions are checked immediately before job execution. 4.4. Transport modes for remote execution You can configure your Satellite to use two different modes of transport for remote job execution. You can configure single Capsule to use either one mode or the other but not both. Push-based transport On Capsules in ssh mode, remote execution uses the SSH service to transport job details. This is the default transport mode. The SSH service must be enabled and active on the target hosts. The remote execution Capsule must have access to the SSH port on the target hosts. Unless you have a different setting, the standard SSH port is 22. This transport mode supports both Script and Ansible providers. 
Pull-based transport On Capsules in pull-mqtt mode, remote execution uses Message Queueing Telemetry Transport (MQTT) to initiate the job execution it receives from Satellite Server. The host subscribes to the MQTT broker on Capsule for job notifications by using the yggdrasil pull client. After the host receives a notification from the MQTT broker, it pulls job details from Capsule over HTTPS, runs the job, and reports results back to Capsule. This transport mode supports the Script provider only. To use the pull-mqtt mode, you must enable it on Capsule Server and configure the pull client on hosts. Note If your Capsule already uses the pull-mqtt mode and you want to switch back to the ssh mode, run this satellite-installer command: Additional resources To enable pull mode on Capsule Server, see Configuring pull-based transport for remote execution in Installing Capsule Server . To enable pull mode on a registered host, continue with Section 4.5, "Configuring a host to use the pull client" . To enable pull mode on a new host, continue with the following in Managing hosts : Creating a Host Registering Hosts 4.5. Configuring a host to use the pull client For Capsules configured to use pull-mqtt mode, hosts can subscribe to remote jobs using the remote execution pull client. Hosts do not require an SSH connection from their Capsule Server. Prerequisites You have registered the host to Satellite. The Capsule through which the host is registered is configured to use pull-mqtt mode. For more information, see Configuring pull-based transport for remote execution in Installing Capsule Server . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . The host can communicate with its Capsule over MQTT using port 1883 . The host can communicate with its Capsule over HTTPS. Procedure Install the katello-pull-transport-migrate package on your host: On Red Hat Enterprise Linux 9 and Red Hat Enterprise Linux 8 hosts: On Red Hat Enterprise Linux 7 hosts: The package installs foreman_ygg_worker and yggdrasil as dependencies, configures the yggdrasil client, and starts the pull client worker on the host. Verification Check the status of the yggdrasild service: 4.6. Creating a job template Use this procedure to create a job template. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Click the Template tab, and in the Name field, enter a unique name for your job template. Select Default to make the template available for all organizations and locations. Create the template directly in the template editor or upload it from a text file by clicking Import . Optional: In the Audit Comment field, add information about the change. Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories in Managing hosts . Optional: In the Description Format field, enter a description template. For example, Install package %{package_name} . You can also use %{template_name} and %{job_category} in your template. From the Provider Type list, select SSH for shell scripts and Ansible for Ansible tasks or playbooks. 
Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete. Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab. Optional: Click Foreign input set to include other templates in this job. Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting. Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet . Optional: If you use the Ansible provider, click the Ansible tab. Select Enable Ansible Callback to allow hosts to send facts, which are used to create configuration reports, back to Satellite after a job finishes. Click the Location tab and add the locations where you want to use the template. Click the Organizations tab and add the organizations where you want to use the template. Click Submit to save your changes. You can extend and customize job templates by including other templates in the template syntax. For more information, see Template Writing Reference and Job Template Examples and Extensions in Managing hosts . CLI procedure To create a job template using a template-definition file, enter the following command: 4.7. Importing an Ansible Playbook by name You can import Ansible Playbooks by name to Satellite from collections installed on Capsule. Satellite creates a job template from the imported playbook and places the template in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Fetch the available Ansible Playbooks by using the following API request: Select the Ansible Playbook you want to import and note its name. Import the Ansible Playbook by its name: You get a notification in the Satellite web UI after the import completes. steps You can run the playbook by executing a remote job from the created job template. For more information, see Section 4.21, "Executing a remote job" . 4.8. Importing all available Ansible Playbooks You can import all the available Ansible Playbooks to Satellite from collections installed on Capsule. Satellite creates job templates from the imported playbooks and places the templates in the Ansible Playbook - Imported job category. If you have a custom collection, place it in /etc/ansible/collections/ansible_collections/ My_Namespace / My_Collection . Prerequisites Ansible plugin is enabled. Your Satellite account has a role that grants the import_ansible_playbooks permission. Procedure Import the Ansible Playbooks by using the following API request: You get a notification in the Satellite web UI after the import completes. steps You can run the playbooks by executing a remote job from the created job templates. For more information, see Section 4.21, "Executing a remote job" . 4.9. Configuring the fallback to any Capsule remote execution setting in Satellite You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. 
This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled. If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following: DHCP, DNS and TFTP Capsules assigned to the host's subnets DNS Capsule assigned to the host's domain Realm Capsule assigned to the host's realm Puppet server Capsule Puppet CA Capsule OpenSCAP Capsule Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Fallback to Any Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. To set the value to true , enter the following command: 4.10. Configuring the global Capsule remote execution setting in Satellite By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets. If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host's organization and location to execute a remote job. Procedure In the Satellite web UI, navigate to Administer > Settings . Click Remote Execution . Configure the Enable Global Capsule setting. CLI procedure Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. To set the value to true , enter the following command: 4.11. Configuring Satellite to use an alternative directory to execute remote jobs on hosts Ansible puts its own files it requires on the server side into the /tmp directory. You have the option to set a different directory if required. Procedure On your Satellite Server or Capsule Server, create a new directory: Copy the SELinux context from the default /tmp directory: Configure your Satellite Server or Capsule Server to use the new directory: 4.12. Altering the privilege elevation method By default, push-based remote execution uses sudo to switch from the SSH user to the effective user that executes the script on your host. In some situations, you might require to use another method, such as su or dzdo . You can globally configure an alternative method in your Satellite settings. Prerequisites Your user account has a role assigned that grants the view_settings and edit_settings permissions. If you want to use dzdo for Ansible jobs, ensure the community.general Ansible collection, which contains the required dzdo become plugin, is installed. For more information, see Installing collections in Ansible documentation . Procedure Navigate to Administer > Settings . Select the Remote Execution tab. Click the value of the Effective User Method setting. Select the new value. Click Submit . 4.13. Distributing SSH keys for remote execution For Capsules in ssh mode, remote execution connections are authenticated using SSH. The public SSH key from Capsule must be distributed to its attached hosts that you want to manage. 
Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22. Use one of the following methods to distribute the public SSH key from Capsule to target hosts: Section 4.14, "Distributing SSH keys for remote execution manually" . Section 4.16, "Using the Satellite API to obtain SSH keys for remote execution" . Section 4.17, "Configuring a Kickstart template to distribute SSH keys during provisioning" . For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template in Managing hosts . Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default. If the hosts are running on Amazon Web Services, enable password authentication. For more information, see New User Accounts . 4.14. Distributing SSH keys for remote execution manually To distribute SSH keys manually, complete the following steps: Procedure Copy the SSH pub key from your Capsule to your target host: Repeat this step for each target host you want to manage. Verification To confirm that the key was successfully copied to the target host, enter the following command on Capsule: 4.15. Adding a passphrase to SSH key used for remote execution By default, Capsule uses a non-passphrase protected SSH key to execute remote jobs on hosts. You can protect the SSH key with a passphrase by following this procedure. Procedure On your Satellite Server or Capsule Server, use ssh-keygen to add a passphrase to your SSH key: steps Users now must use a passphrase when running remote execution jobs on hosts. 4.16. Using the Satellite API to obtain SSH keys for remote execution To use the Satellite API to download the public key from Capsule, complete this procedure on each target host. Procedure On the target host, create the ~/.ssh directory to store the SSH key: Download the SSH key from Capsule: Configure permissions for the ~/.ssh directory: Configure permissions for the authorized_keys file: 4.17. Configuring a Kickstart template to distribute SSH keys during provisioning You can add a remote_execution_ssh_keys snippet to your custom Kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Satellite copies the SSH key for remote execution to the systems during provisioning. Procedure To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use: 4.18. Configuring a keytab for Kerberos ticket granting tickets Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets. Procedure Find the ID of the foreman-proxy user: Modify the umask value so that new files have the permissions 600 : Create the directory for the keytab: Create a keytab or copy an existing keytab to the directory: Change the directory owner to the foreman-proxy user: Ensure that the keytab file is read-only: Restore the SELinux context: 4.19. Configuring Kerberos authentication for remote execution You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts. 
Prerequisites Enroll Satellite Server on the Kerberos server Enroll the Satellite target host on the Kerberos server Configure and initialize a Kerberos user account for remote execution Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket Procedure To install and enable Kerberos authentication for remote execution, enter the following command: To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the Remote Execution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account. Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account. Verification To confirm that Kerberos authentication is ready to use, run a remote job on the host. For more information, see Executing a Remote Job in Managing configurations by using Ansible integration . 4.20. Setting up job templates Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Templates > Job templates . If you want to use a template without making changes, proceed to Executing a Remote Job in Managing hosts . You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone. Procedure To clone a template, in the Actions column, select Clone . Enter a unique name for the clone and click Submit to save the changes. Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in Managing hosts . Ansible considerations To create an Ansible job template, use the following procedure and instead of ERB syntax, use YAML syntax. Begin the template with --- . You can embed an Ansible Playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible Playbooks in Satellite. For more information, see Synchronizing Repository Templates in Managing hosts . Parameter variables At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host's edit page can be used as input parameters for job templates. 4.21. Executing a remote job You can execute a job that is based on a job template against one or more hosts. Note Ansible jobs run in batches on multiple hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible Playbook runs on all hosts in the batch. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . Select the Job category and the Job template you want to use, then click . Select hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. Note If you want to select a host group and all of its subgroups, it is not sufficient to select the host group as the job would only run on hosts directly in that group and not on hosts in subgroups. Instead, you must either select the host group and all of its subgroups or use this search query: Replace My_Host_Group with the name of the top-level host group. If required, provide inputs for the job template. Different templates have different inputs and some templates do not have any inputs. 
After entering all the required inputs, click . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 4.22, "Advanced settings in the job wizard" . Click . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job in future time, select Future execution . To execute the job on regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click . Static query means that job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and the condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. CLI procedure Enter the following command on Satellite: Find the ID of the job template you want to use: Show the template details to see parameters required by your template: Execute a remote job with custom parameters: Replace My_Search_Query with the filter expression that defines hosts, for example "name ~ My_Pattern " . Additional resources For more information about creating, monitoring, or canceling remote jobs with Hammer CLI, enter hammer job-template --help and hammer job-invocation --help . 4.22. Advanced settings in the job wizard Some job templates require you to enter advanced settings. Some of the advanced settings are only visible to certain job templates. Below is the list of general advanced settings. SSH user A user to be used for connecting to the host through SSH. Effective user A user to be used for executing the job. By default it is the SSH user. If it differs from the SSH user, su or sudo, depending on your settings, is used to switch the accounts. If you set an effective user in the advanced settings, Ansible sets ansible_become_user to your input value and ansible_become to true . This means that if you use the parameters become: true and become_user: My_User within a playbook, these will be overwritten by Satellite. If your SSH user and effective user are identical, Satellite does not overwrite the become_user . Therefore, you can set a custom become_user in your Ansible Playbook. Description A description template for the job. Timeout to kill Time in seconds from the start of the job after which the job should be killed if it is not finished already. Time to pickup Time in seconds after which the job is canceled if it is not picked up by a client. This setting only applies to hosts using pull-mqtt transport. 
Password Is used if SSH authentication method is a password instead of the SSH key. Private key passphrase Is used if SSH keys are protected by a passphrase. Effective user password Is used if effective user is different from the ssh user. Concurrency level Defines the maximum number of jobs executed at once. This can prevent overload of system resources in a case of executing the job on a large number of hosts. Execution ordering Determines the order in which the job is executed on hosts. It can be alphabetical or randomized. 4.23. Using extended cron lines When scheduling a cron job with remote execution, you can use an extended cron line to specify the cadence of the job. The standard cron line contains five fields that specify minute, hour, day of the month, month, and day of the week. For example, 0 5 * * * stands for every day at 5 AM. The extended cron line provides the following features: You can use # to specify a concrete week day in a month For example: 0 0 * * mon#1 specifies first Monday of the month 0 0 * * fri#3,fri#4 specifies 3rd and 4th Fridays of the month 0 7 * * fri#-1 specifies the last Friday of the month at 07:00 0 7 * * fri#L also specifies the last Friday of the month at 07:00 0 23 * * mon#2,tue specifies the 2nd Monday of the month and every Tuesday, at 23:00 You can use % to specify every n-th day of the month For example: 9 0 * * sun%2 specifies every other Sunday at 00:09 0 0 * * sun%2+1 specifies every odd Sunday 9 0 * * sun%2,tue%3 specifies every other Sunday and every third Tuesday You can use & to specify that the day of the month has to match the day of the week For example: 0 0 30 * 1& specifies 30th day of the month, but only if it is Monday 4.24. Scheduling a recurring Ansible job for a host You can schedule a recurring job to run Ansible roles on hosts. Prerequisites Ensure you have the view_foreman_tasks , view_job_invocations , and view_recurring_logics permissions. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the target host on which you want to execute a remote job. On the Ansible tab, select Jobs . Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . Optional: View the scheduled Ansible job in host overview or by navigating to Ansible > Jobs . 4.25. Scheduling a recurring Ansible job for a host group You can schedule a recurring job to run Ansible roles on host groups. Procedure In the Satellite web UI, navigate to Configure > Host groups . In the Actions column, select Configure Ansible Job for the host group you want to schedule an Ansible roles run for. Click Schedule recurring job . Define the repetition frequency, start time, and date of the first run in the Create New Recurring Ansible Run window. Click Submit . 4.26. Using Ansible provider for package and errata actions By default, Satellite is configured to use the Script provider templates for remote execution jobs. If you prefer using Ansible job templates for your remote jobs, you can configure Satellite to use them by default for remote execution features associated with them. Note Remember that Ansible job templates only work when remote execution is configured for ssh mode. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Find each feature whose name contains by_search . Change the job template for these features from Katello Script Default to Katello Ansible Default . Click Submit . 
Satellite now uses Ansible provider templates for remote execution jobs by which you can perform package and errata actions. This applies to job invocations from the Satellite web UI as well as by using hammer job-invocation create with the same remote execution features that you have changed. 4.27. Setting the job rate limit on Capsule You can limit the maximum number of active jobs on a Capsule at a time to prevent performance spikes. The job is active from the time Capsule first tries to notify the host about the job until the job is finished on the host. The job rate limit only applies to mqtt based jobs. Note The optimal maximum number of active jobs depends on the computing resources of your Capsule Server. By default, the maximum number of active jobs is unlimited. Procedure Set the maximum number of active jobs using satellite-installer : For example: | [
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh",
"dnf install katello-pull-transport-migrate",
"yum install katello-pull-transport-migrate",
"systemctl status yggdrasild",
"hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH",
"curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_capsule_ID",
"curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir /My_Remote_Working_Directory",
"chcon --reference=/tmp /My_Remote_Working_Directory",
"satellite-installer --foreman-proxy-plugin-ansible-working-dir /My_Remote_Working_Directory",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"",
"cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true",
"hostgroup_fullname ~ \" My_Host_Group *\"",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id My_Template_ID",
"hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/Configuring_and_Setting_Up_Remote_Jobs_ansible |
Chapter 32. Entitlement | Chapter 32. Entitlement subscription-manager component, BZ#910345 The default firstboot behavior is to prompt for Subscription Manager or Subscription Asset Manager (SAM) details. It no longer offers a path to use the Red Hat Network Classic registration tool. rhn-client-tools component, BZ#910345 The rhn-client-tools utility is no longer configured by default to communicate with xmlrpc.rhn.redhat.com . Instead, it prompts for the user's Red Hat Satellite or Red Hat Proxy details. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/known-issues-entitlement
Chapter 10. Visualizing logs | Chapter 10. Visualizing logs 10.1. About log visualization You can visualize your log data in the OpenShift Container Platform web console, or the Kibana web console, depending on your deployed log storage solution. The Kibana console can be used with Elasticsearch log stores, and the OpenShift Container Platform web console can be used with the Elasticsearch log store or the LokiStack. Note The Kibana web console is now deprecated and is planned to be removed in a future logging release. 10.1.1. Configuring the log visualizer You can configure which log visualizer type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogging CR. Important If you want to use the OpenShift Container Platform web console for visualization, you must enable the logging Console Plugin. See the documentation about "Log visualization with the web console". Procedure Modify the ClusterLogging CR visualization spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {} # ... 1 The type of visualizer you want to use for your logging. This can be either kibana or ocp-console . The Kibana console is only compatible with deployments that use Elasticsearch log storage, while the OpenShift Container Platform console is only compatible with LokiStack deployments. 2 Optional configurations for the Kibana console. 3 Optional configurations for the OpenShift Container Platform web console. Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.1.2. Viewing logs for a resource Resource logs are a default feature that provides limited log viewing capability. You can view the logs for various resources, such as builds, deployments, and pods, by using the OpenShift CLI ( oc ) and the web console. Tip To enhance your log retrieval and viewing experience, install the logging. The logging aggregates all the logs from your OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs, into a dedicated log store. You can then query, discover, and visualize your log data through the Kibana console or the OpenShift Container Platform web console. Resource logs do not access the logging log store. 10.1.2.1. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs are read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ). Procedure (UI) In the OpenShift Container Platform console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod.
<container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 10.2. Log visualization with the web console You can use the OpenShift Container Platform web console to visualize log data by configuring the logging Console Plugin. Options for configuration are available during installation of logging on the web console. If you have already installed logging and want to configure the plugin, use one of the following procedures. 10.2.1. Enabling the logging Console Plugin after you have installed the Red Hat OpenShift Logging Operator You can enable the logging Console Plugin as part of the Red Hat OpenShift Logging Operator installation, but you can also enable the plugin if you have already installed the Red Hat OpenShift Logging Operator with the plugin disabled. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator and selected Disabled for the Console plugin . You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console Administrator perspective, navigate to Operators Installed Operators . Click Red Hat OpenShift Logging . This takes you to the Operator Details page. In the Details page, click Disabled for the Console plugin option. In the Console plugin enablement dialog, select Enable . Click Save . Verify that the Console plugin option now shows Enabled . The web console displays a pop-up window when changes have been applied. The window prompts you to reload the web console. Refresh the browser when you see the pop-up window to apply the changes. 10.2.2. Configuring the logging Console Plugin when you have the Elasticsearch log store and LokiStack installed In logging version 5.8 and later, if the Elasticsearch log store is your default log store but you have also installed the LokiStack, you can enable the logging Console Plugin by using the following procedure. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator, the OpenShift Elasticsearch Operator, and the Loki Operator. You have installed the OpenShift CLI ( oc ). You have created a ClusterLogging custom resource (CR). 
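Before you start the procedure, you can optionally confirm from the command line that the prerequisites are in place. The following checks are a minimal sketch; the resource name instance and the openshift-logging namespace are the defaults used elsewhere in this documentation, so adjust them if your deployment differs:
oc get clusterlogging instance -n openshift-logging
oc get lokistack -n openshift-logging
Both commands should return a resource without an error before you continue with the procedure.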
Procedure Ensure that the logging Console Plugin is enabled by running the following command: USD oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin \ || oc patch consoles.operator.openshift.io cluster --type=merge \ --patch '{ "spec": { "plugins": ["logging-view-plugin"]}}' Add the .metadata.annotations.logging.openshift.io/ocp-console-migration-target: lokistack-dev annotation to the ClusterLogging CR, by running the following command: USD oc patch clusterlogging instance --type=merge --patch \ '{ "metadata": { "annotations": { "logging.openshift.io/ocp-console-migration-target": "lokistack-dev" }}}' \ -n openshift-logging Example output clusterlogging.logging.openshift.io/instance patched Verification Verify that the annotation was added successfully, by running the following command and observing the output: USD oc get clusterlogging instance \ -o=jsonpath='{.metadata.annotations.logging\.openshift\.io/ocp-console-migration-target}' \ -n openshift-logging Example output "lokistack-dev" The logging Console Plugin pod is now deployed. You can view logging data by navigating to the OpenShift Container Platform web console and viewing the Observe Logs page. 10.3. Viewing cluster dashboards The Logging/Elasticsearch Nodes and Openshift Logging dashboards in the OpenShift Container Platform web console contain in-depth details about your Elasticsearch instance and the individual Elasticsearch nodes that you can use to prevent and diagnose problems. The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster level, including cluster resources, garbage collection, shards in the cluster, and Fluentd statistics. The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node level, including details on indexing, shards, resources, and so forth. 10.3.1. Accessing the Elasticsearch and OpenShift Logging dashboards You can view the Logging/Elasticsearch Nodes and OpenShift Logging dashboards in the OpenShift Container Platform web console. Procedure To launch the dashboards: In the OpenShift Container Platform web console, click Observe Dashboards . On the Dashboards page, select Logging/Elasticsearch Nodes or OpenShift Logging from the Dashboard menu. For the Logging/Elasticsearch Nodes dashboard, you can select the Elasticsearch node you want to view and set the data resolution. The appropriate dashboard is displayed, showing multiple charts of data. Optional: Select a different time range to display or refresh rate for the data from the Time Range and Refresh Interval menus. For information on the dashboard charts, see About the OpenShift Logging dashboard and About the Logging/Elastisearch Nodes dashboard . 10.3.2. About the OpenShift Logging dashboard The OpenShift Logging dashboard contains charts that show details about your Elasticsearch instance at a cluster-level that you can use to diagnose and anticipate problems. Table 10.1. OpenShift Logging charts Metric Description Elastic Cluster Status The current Elasticsearch status: ONLINE - Indicates that the Elasticsearch instance is online. OFFLINE - Indicates that the Elasticsearch instance is offline. Elastic Nodes The total number of Elasticsearch nodes in the Elasticsearch instance. Elastic Shards The total number of Elasticsearch shards in the Elasticsearch instance. Elastic Documents The total number of Elasticsearch documents in the Elasticsearch instance. 
Total Index Size on Disk The total disk space that is being used for the Elasticsearch indices. Elastic Pending Tasks The total number of Elasticsearch changes that have not been completed, such as index creation, index mapping, shard allocation, or shard failure. Elastic JVM GC time The amount of time that the JVM spent executing Elasticsearch garbage collection operations in the cluster. Elastic JVM GC Rate The total number of times that JVM executed garbage activities per second. Elastic Query/Fetch Latency Sum Query latency: The average time each Elasticsearch search query takes to execute. Fetch latency: The average time each Elasticsearch search query spends fetching data. Fetch latency typically takes less time than query latency. If fetch latency is consistently increasing, it might indicate slow disks, data enrichment, or large requests with too many results. Elastic Query Rate The total queries executed against the Elasticsearch instance per second for each Elasticsearch node. CPU The amount of CPU used by Elasticsearch, Fluentd, and Kibana, shown for each component. Elastic JVM Heap Used The amount of JVM memory used. In a healthy cluster, the graph shows regular drops as memory is freed by JVM garbage collection. Elasticsearch Disk Usage The total disk space used by the Elasticsearch instance for each Elasticsearch node. File Descriptors In Use The total number of file descriptors used by Elasticsearch, Fluentd, and Kibana. FluentD emit count The total number of Fluentd messages per second for the Fluentd default output, and the retry count for the default output. FluentD Buffer Usage The percent of the Fluentd buffer that is being used for chunks. A full buffer might indicate that Fluentd is not able to process the number of logs received. Elastic rx bytes The total number of bytes that Elasticsearch has received from FluentD, the Elasticsearch nodes, and other sources. Elastic Index Failure Rate The total number of times per second that an Elasticsearch index fails. A high rate might indicate an issue with indexing. FluentD Output Error Rate The total number of times per second that FluentD is not able to output logs. 10.3.3. Charts on the Logging/Elasticsearch nodes dashboard The Logging/Elasticsearch Nodes dashboard contains charts that show details about your Elasticsearch instance, many at node-level, for further diagnostics. Elasticsearch status The Logging/Elasticsearch Nodes dashboard contains the following charts about the status of your Elasticsearch instance. Table 10.2. Elasticsearch status fields Metric Description Cluster status The cluster health status during the selected time period, using the Elasticsearch green, yellow, and red statuses: 0 - Indicates that the Elasticsearch instance is in green status, which means that all shards are allocated. 1 - Indicates that the Elasticsearch instance is in yellow status, which means that replica shards for at least one shard are not allocated. 2 - Indicates that the Elasticsearch instance is in red status, which means that at least one primary shard and its replicas are not allocated. Cluster nodes The total number of Elasticsearch nodes in the cluster. Cluster data nodes The number of Elasticsearch data nodes in the cluster. Cluster pending tasks The number of cluster state changes that are not finished and are waiting in a cluster queue, for example, index creation, index deletion, or shard allocation. A growing trend indicates that the cluster is not able to keep up with changes. 
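If you want to cross-check the values that these Elasticsearch status charts report, you can query the Elasticsearch API directly. The following commands are a minimal sketch that assumes the es_util helper script available in the Elasticsearch container of OpenShift Logging deployments and uses a placeholder pod name that you must replace with one of your Elasticsearch pods; if es_util is not available in your version, you can issue the same queries with curl against the Elasticsearch service:
oc -n openshift-logging exec -c elasticsearch <elasticsearch_pod_name> -- es_util --query="_cluster/health?pretty"
oc -n openshift-logging exec -c elasticsearch <elasticsearch_pod_name> -- es_util --query="_cat/pending_tasks?v"
The first query returns the cluster status, node counts, and unassigned shard count; the second lists the pending cluster tasks that the Cluster pending tasks chart summarizes.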
Elasticsearch cluster index shard status Each Elasticsearch index is a logical group of one or more shards, which are basic units of persisted data. There are two types of index shards: primary shards, and replica shards. When a document is indexed into an index, it is stored in one of its primary shards and copied into every replica of that shard. The number of primary shards is specified when the index is created, and the number cannot change during index lifetime. You can change the number of replica shards at any time. The index shard can be in several states depending on its lifecycle phase or events occurring in the cluster. When the shard is able to perform search and indexing requests, the shard is active. If the shard cannot perform these requests, the shard is non-active. A shard might be non-active if the shard is initializing, reallocating, unassigned, and so forth. Index shards consist of a number of smaller internal blocks, called index segments, which are physical representations of the data. An index segment is a relatively small, immutable Lucene index that is created when Lucene commits newly-indexed data. Lucene, a search library used by Elasticsearch, merges index segments into larger segments in the background to keep the total number of segments low. If the process of merging segments is slower than the speed at which new segments are created, it could indicate a problem. When Lucene performs data operations, such as a search operation, Lucene performs the operation against the index segments in the relevant index. For that purpose, each segment contains specific data structures that are loaded in the memory and mapped. Index mapping can have a significant impact on the memory used by segment data structures. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch index shards. Table 10.3. Elasticsearch cluster shard status charts Metric Description Cluster active shards The number of active primary shards and the total number of shards, including replicas, in the cluster. If the number of shards grows higher, the cluster performance can start degrading. Cluster initializing shards The number of non-active shards in the cluster. A non-active shard is one that is initializing, being reallocated to a different node, or is unassigned. A cluster typically has non-active shards for short periods. A growing number of non-active shards over longer periods could indicate a problem. Cluster relocating shards The number of shards that Elasticsearch is relocating to a new node. Elasticsearch relocates nodes for multiple reasons, such as high memory use on a node or after a new node is added to the cluster. Cluster unassigned shards The number of unassigned shards. Elasticsearch shards might be unassigned for reasons such as a new index being added or the failure of a node. Elasticsearch node metrics Each Elasticsearch node has a finite amount of resources that can be used to process tasks. When all the resources are being used and Elasticsearch attempts to perform a new task, Elasticsearch puts the tasks into a queue until some resources become available. The Logging/Elasticsearch Nodes dashboard contains the following charts about resource usage for a selected node and the number of tasks waiting in the Elasticsearch queue. Table 10.4. Elasticsearch node metric charts Metric Description ThreadPool tasks The number of waiting tasks in individual queues, shown by task type. 
A long-term accumulation of tasks in any queue could indicate node resource shortages or some other problem. CPU usage The amount of CPU being used by the selected Elasticsearch node as a percentage of the total CPU allocated to the host container. Memory usage The amount of memory being used by the selected Elasticsearch node. Disk usage The total disk space being used for index data and metadata on the selected Elasticsearch node. Documents indexing rate The rate that documents are indexed on the selected Elasticsearch node. Indexing latency The time taken to index the documents on the selected Elasticsearch node. Indexing latency can be affected by many factors, such as JVM Heap memory and overall load. A growing latency indicates a resource capacity shortage in the instance. Search rate The number of search requests run on the selected Elasticsearch node. Search latency The time taken to complete search requests on the selected Elasticsearch node. Search latency can be affected by many factors. A growing latency indicates a resource capacity shortage in the instance. Documents count (with replicas) The number of Elasticsearch documents stored on the selected Elasticsearch node, including documents stored in both the primary shards and replica shards that are allocated on the node. Documents deleting rate The number of Elasticsearch documents being deleted from any of the index shards that are allocated to the selected Elasticsearch node. Documents merging rate The number of Elasticsearch documents being merged in any of index shards that are allocated to the selected Elasticsearch node. Elasticsearch node fielddata Fielddata is an Elasticsearch data structure that holds lists of terms in an index and is kept in the JVM Heap. Because fielddata building is an expensive operation, Elasticsearch caches the fielddata structures. Elasticsearch can evict a fielddata cache when the underlying index segment is deleted or merged, or if there is not enough JVM HEAP memory for all the fielddata caches. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch fielddata. Table 10.5. Elasticsearch node fielddata charts Metric Description Fielddata memory size The amount of JVM Heap used for the fielddata cache on the selected Elasticsearch node. Fielddata evictions The number of fielddata structures that were deleted from the selected Elasticsearch node. Elasticsearch node query cache If the data stored in the index does not change, search query results are cached in a node-level query cache for reuse by Elasticsearch. The Logging/Elasticsearch Nodes dashboard contains the following charts about the Elasticsearch node query cache. Table 10.6. Elasticsearch node query charts Metric Description Query cache size The total amount of memory used for the query cache for all the shards allocated to the selected Elasticsearch node. Query cache evictions The number of query cache evictions on the selected Elasticsearch node. Query cache hits The number of query cache hits on the selected Elasticsearch node. Query cache misses The number of query cache misses on the selected Elasticsearch node. Elasticsearch index throttling When indexing documents, Elasticsearch stores the documents in index segments, which are physical representations of the data. At the same time, Elasticsearch periodically merges smaller segments into a larger segment as a way to optimize resource use. 
If the indexing is faster than the ability to merge segments, the merge process does not complete quickly enough, which can lead to issues with searches and performance. To prevent this situation, Elasticsearch throttles indexing, typically by reducing the number of threads allocated to indexing down to a single thread. The Logging/Elasticsearch Nodes dashboard contains the following charts about Elasticsearch index throttling. Table 10.7. Index throttling charts Metric Description Indexing throttling The amount of time that Elasticsearch has been throttling the indexing operations on the selected Elasticsearch node. Merging throttling The amount of time that Elasticsearch has been throttling the segment merge operations on the selected Elasticsearch node. Node JVM Heap statistics The Logging/Elasticsearch Nodes dashboard contains the following charts about JVM Heap operations. Table 10.8. JVM Heap statistic charts Metric Description Heap used The amount of the total allocated JVM Heap space that is used on the selected Elasticsearch node. GC count The number of garbage collection operations that have been run on the selected Elasticsearch node, by old and young garbage collection. GC time The amount of time that the JVM spent running garbage collection operations on the selected Elasticsearch node, by old and young garbage collection. 10.4. Log visualization with Kibana If you are using the Elasticsearch log store, you can use the Kibana console to visualize collected log data. Using Kibana, you can do the following with your data: Search and browse the data using the Discover tab. Chart and map the data using the Visualize tab. Create and view custom dashboards using the Dashboard tab. Use and configuration of the Kibana interface is beyond the scope of this documentation. For more information about using the interface, see the Kibana documentation . Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. 10.4.1. Defining Kibana index patterns An index pattern defines the Elasticsearch indices that you want to visualize. To explore and visualize data in Kibana, you must create an index pattern. Prerequisites A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods --subresource log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Elasticsearch documents must be indexed before you can create index patterns. This is done automatically, but it might take a few minutes in a new or updated cluster. Procedure To define index patterns and create visualizations in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging .
Create your Kibana index patterns by clicking Management Index Patterns Create index pattern : Each user must manually create index patterns when logging into Kibana the first time to see logs for their projects. Users must create an index pattern named app and use the @timestamp time field to view their container logs. Each admin user must create index patterns when logged into Kibana the first time for the app , infra , and audit indices using the @timestamp time field. Create Kibana Visualizations from the new index patterns. 10.4.2. Viewing cluster logs in Kibana You view cluster logs in the Kibana web console. The methods for viewing and visualizing your data in Kibana are beyond the scope of this documentation. For more information, refer to the Kibana documentation . Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Kibana index patterns must exist. A user must have the cluster-admin role, the cluster-reader role, or both roles to view the infra and audit indices in Kibana. The default kubeadmin user has proper permissions to view these indices. If you can view the pods and logs in the default , kube- and openshift- projects, you should be able to access these indices. You can use the following command to check if the current user has appropriate permissions: USD oc auth can-i get pods --subresource log -n <project> Example output yes Note The audit logs are not stored in the internal OpenShift Container Platform Elasticsearch instance by default. To view the audit logs in Kibana, you must use the Log Forwarding API to configure a pipeline that uses the default output for audit logs. Procedure To view logs in Kibana: In the OpenShift Container Platform console, click the Application Launcher and select Logging . Log in using the same credentials you use to log in to the OpenShift Container Platform console. The Kibana interface launches. In Kibana, click Discover . Select the index pattern you created from the drop-down menu in the top-left corner: app , audit , or infra . The log data displays as time-stamped documents. Expand one of the time-stamped documents. Click the JSON tab to display the log entry for that document. Example 10.1.
Sample infrastructure log entry in Kibana { "_index": "infra-000001", "_type": "_doc", "_id": "YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3", "_version": 1, "_score": null, "_source": { "docker": { "container_id": "f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1" }, "kubernetes": { "container_name": "registry-server", "namespace_name": "openshift-marketplace", "pod_name": "redhat-marketplace-n64gc", "container_image": "registry.redhat.io/redhat/redhat-marketplace-index:v4.7", "container_image_id": "registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f", "pod_id": "8f594ea2-c866-4b5c-a1c8-a50756704b2a", "host": "ip-10-0-182-28.us-east-2.compute.internal", "master_url": "https://kubernetes.default.svc", "namespace_id": "3abab127-7669-4eb3-b9ef-44c04ad68d38", "namespace_labels": { "openshift_io/cluster-monitoring": "true" }, "flat_labels": [ "catalogsource_operators_coreos_com/update=redhat-marketplace" ] }, "message": "time=\"2020-09-23T20:47:03Z\" level=info msg=\"serving registry\" database=/database/index.db port=50051", "level": "unknown", "hostname": "ip-10-0-182-28.internal", "pipeline_metadata": { "collector": { "ipaddr4": "10.0.182.28", "inputname": "fluent-plugin-systemd", "name": "fluentd", "received_at": "2020-09-23T20:47:15.007583+00:00", "version": "1.7.4 1.6.0" } }, "@timestamp": "2020-09-23T20:47:03.422465+00:00", "viaq_msg_id": "YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3", "openshift": { "labels": { "logging": "infra" } } }, "fields": { "@timestamp": [ "2020-09-23T20:47:03.422Z" ], "pipeline_metadata.collector.received_at": [ "2020-09-23T20:47:15.007Z" ] }, "sort": [ 1600894023422 ] } 10.4.3. Configuring Kibana You can configure using the Kibana console by modifying the ClusterLogging custom resource (CR). 10.4.3.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed. 10.4.3.2. Scaling redundancy for the log visualizer nodes You can scale the pod that hosts the log visualizer for redundancy. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging .... 
spec: visualization: type: "kibana" kibana: replicas: 1 1 1 Specify the number of Kibana nodes. | [
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: visualization: type: <visualizer_type> 1 kibana: 2 resources: {} nodeSelector: {} proxy: {} replicas: {} tolerations: {} ocpConsole: 3 logsLimit: {} timeout: {}",
"oc apply -f <filename>.yaml",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc get consoles.operator.openshift.io cluster -o yaml |grep logging-view-plugin || oc patch consoles.operator.openshift.io cluster --type=merge --patch '{ \"spec\": { \"plugins\": [\"logging-view-plugin\"]}}'",
"oc patch clusterlogging instance --type=merge --patch '{ \"metadata\": { \"annotations\": { \"logging.openshift.io/ocp-console-migration-target\": \"lokistack-dev\" }}}' -n openshift-logging",
"clusterlogging.logging.openshift.io/instance patched",
"oc get clusterlogging instance -o=jsonpath='{.metadata.annotations.logging\\.openshift\\.io/ocp-console-migration-target}' -n openshift-logging",
"\"lokistack-dev\"",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"oc auth can-i get pods --subresource log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd",
"oc -n openshift-logging edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging . spec: visualization: type: \"kibana\" kibana: replicas: 1 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/logging/visualizing-logs |
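A non-interactive alternative to the oc edit steps above is to apply the same fields with a merge patch. This is only a sketch and is not part of the original procedure: the resource values shown (1Gi, 500m, replicas 2) are placeholders to replace with your own sizing, and only the CR name instance and the openshift-logging namespace are taken from the section above. Because ClusterLogging is a standard custom resource, a generic merge patch should normally be accepted:

oc -n openshift-logging patch clusterlogging instance --type merge \
  -p '{"spec":{"visualization":{"kibana":{"replicas":2,"resources":{"limits":{"memory":"1Gi"},"requests":{"cpu":"500m","memory":"1Gi"}}}}}}'

oc -n openshift-logging get clusterlogging instance -o jsonpath='{.spec.visualization.kibana}'

The second command is simply a convenient way to confirm that the patched values were stored in the CR.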
Chapter 3. Tuning Satellite Server with predefined profiles | Chapter 3. Tuning Satellite Server with predefined profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory. When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command: | [
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/tuning-with-predefined-profiles_admin |
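The comparison step in the procedure above can be approximated from the shell before you run satellite-installer --tuning medium. The following is a hedged sketch only: it assumes the medium profile is stored as medium.yaml under the sizes directory and that every Hiera key starts at the beginning of a line and ends with a colon. It prints keys that appear both in your custom-hiera.yaml and in the profile files; those are the candidates to remove from /etc/foreman-installer/custom-hiera.yaml before applying the profile.

comm -12 \
  <(grep -hoE '^[A-Za-z0-9_:]+:' /etc/foreman-installer/custom-hiera.yaml | sort -u) \
  <(grep -hoE '^[A-Za-z0-9_:]+:' \
      /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml \
      /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/medium.yaml | sort -u)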
6.3. Using an Advanced Externalizer | 6.3. Using an Advanced Externalizer Using a customized advanced externalizer helps optimize performance in Red Hat JBoss Data Grid. Define and implement the readObject() and writeObject() methods. Link externalizers with marshaller classes. Register the advanced externalizer. 6.3.1. Implement the Methods To use advanced externalizers, define and implement the readObject() and writeObject() methods. The following is a sample definition: Example 6.2. Define and Implement the Methods Note This method does not require annotated user classes. As a result, this method is valid for classes where the source code is not available or cannot be modified. 6.3.2. Link Externalizers with Marshaller Classes Use an implementation of getTypeClasses() to discover the classes that this externalizer can marshall and to link the readObject() and writeObject() classes. The following is a sample implementation: In the provided sample, the ReplicableCommandExternalizer indicates that it can externalize several command types. This sample marshalls all commands that extend the ReplicableCommand interface but the framework only supports class equality comparison so it is not possible to indicate that the classes marshalled are all children of a particular class or interface. In some cases, the class to be externalized is private and therefore the class instance is not accessible. In such a situation, look up the class with the provided fully qualified class name and pass it back. An example of this is as follows: 6.3.3. Register the Advanced Externalizer (Declaratively) After the advanced externalizer is set up, register it for use with Red Hat JBoss Data Grid. This registration is done declaratively (via XML) as follows: Procedure 6.1. Register the Advanced Externalizer Add the global element to the infinispan element. Add the serialization element to the global element. Add the advancedExternalizers element to add information about the new advanced externalizer. Define the externalizer class using the externalizerClass attributes. Replace the $IdViaAnnotationObj and $AdvancedExternalizer values as required. 6.3.4. Register the Advanced Externalizer (Programmatically) After the advanced externalizer is set up, register it for use with Red Hat JBoss Data Grid. This registration is done programmatically as follows: Example 6.3. Registering the Advanced Externalizer Programmatically Enter the desired information for the GlobalConfigurationBuilder in the first line. 6.3.5. Register Multiple Externalizers Alternatively, register multiple advanced externalizers because GlobalConfiguration.addExternalizer() accepts varargs . Before registering the new externalizers, ensure that their IDs are already defined using the @Marshalls annotation. Example 6.4. Registering Multiple Externalizers | [
"import org.infinispan.commons.marshall.AdvancedExternalizer; public class Person { final String name; final int age; public Person(String name, int age) { this.name = name; this.age = age; } public static class PersonExternalizer implements AdvancedExternalizer<Person> { @Override public void writeObject(ObjectOutput output, Person person) throws IOException { output.writeObject(person.name); output.writeInt(person.age); } @Override public Person readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new Person((String) input.readObject(), input.readInt()); } @Override public Set<Class<? extends Person>> getTypeClasses() { return Util.<Class<? extends Person>>asSet(Person.class); } @Override public Integer getId() { return 2345; } } }",
"import org.infinispan.util.Util; <!-- Additional configuration information here --> @Override public Set<Class<? extends ReplicableCommand>> getTypeClasses() { return Util.asSet(LockControlCommand.class, GetKeyValueCommand.class, ClusteredGetCommand.class, MultipleRpcCommand.class, SingleRpcCommand.class, CommitCommand.class, PrepareCommand.class, RollbackCommand.class, ClearCommand.class, EvictCommand.class, InvalidateCommand.class, InvalidateL1Command.class, PutKeyValueCommand.class, PutMapCommand.class, RemoveCommand.class, ReplaceCommand.class); }",
"@Override public Set<Class<? extends List>> getTypeClasses() { return Util.<Class<? extends List>>asSet( Util.<List>loadClass(\"java.util.CollectionsUSDSingletonList\", null)); }",
"<infinispan> <global> <serialization> <advancedExternalizers> <advancedExternalizer externalizerClass=\"org.infinispan.marshall.AdvancedExternalizerTestUSDIdViaAnnotationObjUSDExternalizer\"/> </advancedExternalizers> </serialization> </global> </infinispan>",
"GlobalConfigurationBuilder builder = builder.serialization() .addAdvancedExternalizer(new Person.PersonExternalizer());",
"builder.serialization() .addAdvancedExternalizer(new Person.PersonExternalizer(), new Address.AddressExternalizer());"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-using_an_advanced_externalizer |
Chapter 13. Configuring the cluster network range | Chapter 13. Configuring the cluster network range As a cluster administrator, you can expand the cluster network range after cluster installation. You might want to expand the cluster network range if you need more IP addresses for additional nodes. For example, if you deployed a cluster and specified 10.128.0.0/19 as the cluster network range and a host prefix of 23 , you are limited to 16 nodes. You can expand that to 510 nodes by changing the CIDR mask on a cluster to /14 . When expanding the cluster network address range, your cluster must use the OVN-Kubernetes network plugin . Other network plugins are not supported. The following limitations apply when modifying the cluster network IP address range: The CIDR mask size specified must always be smaller than the currently configured CIDR mask size, because you can only increase IP space by adding more nodes to an installed cluster The host prefix cannot be modified Pods that are configured with an overridden default gateway must be recreated after the cluster network expands 13.1. Expanding the cluster network IP address range You can expand the IP address range for the cluster network. Because this change requires rolling out a new Operator configuration across the cluster, it can take up to 30 minutes to take effect. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To obtain the cluster network range and host prefix for your cluster, enter the following command: USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/22","hostPrefix":23}] To expand the cluster network IP address range, enter the following command. Use the CIDR IP address range and host prefix returned from the output of the command. USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"<network>/<cidr>","hostPrefix":<prefix>} ], "networkType": "OVNKubernetes" } }' where: <network> Specifies the network part of the cidr field that you obtained from the step. You cannot change this value. <cidr> Specifies the network prefix length. For example, 14 . Change this value to a smaller number than the value from the output in the step to expand the cluster network range. <prefix> Specifies the current host prefix for your cluster. This value must be the same value for the hostPrefix field that you obtained from the step. Example command USD oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"10.217.0.0/14","hostPrefix": 23} ], "networkType": "OVNKubernetes" } }' Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take up to 30 minutes for this change to take effect. USD oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/14","hostPrefix":23}] 13.2. Additional resources Red Hat OpenShift Network Calculator About the OVN-Kubernetes network plugin | [
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'",
"network.config.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/configuring-cluster-network-range |
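As a quick sanity check of the node counts quoted above, the number of node subnets that fit in the cluster network is 2^(hostPrefix - CIDR prefix length). A minimal shell sketch using the example values from this chapter:

echo $(( 2 ** (23 - 19) ))   # /19 with hostPrefix 23 -> 16 node subnets
echo $(( 2 ** (23 - 14) ))   # /14 with hostPrefix 23 -> 512 node subnets

This is back-of-the-envelope arithmetic only; as the chapter notes, the practical limit for a /14 with host prefix 23 is 510 nodes, slightly below the raw subnet count.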
Chapter 3. Installing Satellite Server | Chapter 3. Installing Satellite Server When you install Satellite Server from a connected network, you can obtain packages and receive updates directly from the Red Hat Content Delivery Network. Note You cannot register Satellite Server to itself. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. For more information on subscription manifests, see Managing Red Hat Subscriptions in Managing content . Note that the Satellite installation script is based on Puppet, which means that if you run the installation script more than once, it might overwrite any manual configuration changes. To avoid this and determine which future changes apply, use the --noop argument when you run the installation script. This argument ensures that no actual changes are made. Potential changes are written to /var/log/foreman-installer/satellite.log . Files are always backed up and so you can revert any unwanted changes. For example, in the foreman-installer logs, you can see an entry similar to the following about Filebucket: You can restore the file as follows: 3.1. Configuring the HTTP proxy to connect to Red Hat CDN Prerequisites Your network gateway and the HTTP proxy must allow access to the following hosts: Host name Port Protocol subscription.rhsm.redhat.com 443 HTTPS cdn.redhat.com 443 HTTPS *.akamaiedge.net 443 HTTPS cert.console.redhat.com (if using Red Hat Insights) 443 HTTPS api.access.redhat.com (if using Red Hat Insights) 443 HTTPS cert-api.access.redhat.com (if using Red Hat Insights) 443 HTTPS Satellite Server uses SSL to communicate with the Red Hat CDN securely. An SSL interception proxy interferes with this communication. These hosts must be allowlisted on your HTTP proxy. For a list of IP addresses used by the Red Hat CDN (cdn.redhat.com), see the Knowledgebase article Public CIDR Lists for Red Hat on the Red Hat Customer Portal. To configure the Subscription Manager with the HTTP proxy, follow the procedure below. Procedure On Satellite Server, complete the following details in the /etc/rhsm/rhsm.conf file: 3.2. Registering to Red Hat Subscription Management Registering the host to Red Hat Subscription Management enables the host to subscribe to and consume content for any subscriptions available to the user. This includes content such as Red Hat Enterprise Linux and Red Hat Satellite. Procedure Register your system with the Red Hat Content Delivery Network, entering your Customer Portal user name and password when prompted: The command displays output similar to the following: 3.3. Configuring repositories Use these procedures to enable the repositories required to install Satellite Server. Disable all repositories: Enable the following repositories: Enable the DNF modules: Note If there is any warning about conflicts with Ruby or PostgreSQL while enabling satellite:el8 module, see Appendix A, Troubleshooting DNF modules . For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . 3.4. Optional: Using fapolicyd on Satellite Server By enabling fapolicyd on your Satellite Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn on or off the fapolicyd on your Satellite Server or Capsule Server at any point. 3.4.1. 
Installing fapolicyd on Satellite Server You can install fapolicyd along with a new Satellite Server or on an existing Satellite Server. If you install fapolicyd along with the new Satellite Server, the installation process detects fapolicyd on your Red Hat Enterprise Linux host and deploys the Satellite Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux. Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations For a new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 3.5. Installing Satellite Server packages Procedure Update all packages: Install Satellite Server packages: 3.6. Synchronizing the system clock with chronyd To minimize the effects of time drift, you must synchronize the system clock on the base operating system on which you want to install Satellite Server with Network Time Protocol (NTP) servers. If the base operating system clock is configured incorrectly, certificate verification might fail. For more information about the chrony suite, see Using the Chrony suite to configure NTP in Red Hat Enterprise Linux 8 Configuring basic system settings . 3.7. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. 3.7.1. Configuring Satellite installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the most commonly used options and any default values. Use the satellite-installer --scenario satellite --full-help command to display advanced options. Specify a meaningful value for the --foreman-initial-organization option. This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards.
If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization name but not the label. By default, all configuration files configured by the installer are managed. When satellite-installer runs, it overwrites any manual changes to the managed files with the intended values. This means that running the installer on a broken system should restore it to working order, regardless of changes made. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . 3.8. Importing a Red Hat subscription manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Note Simple Content Access (SCA) is set on the organization, not the manifest. Importing a manifest does not change your organization's Simple Content Access status. Prerequisites Ensure you have a Red Hat subscription manifest exported from the Red Hat Hybrid Cloud Console. For more information, see Creating and managing manifests for a connected Satellite Server in Subscription Central . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Choose File . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . CLI procedure Copy the Red Hat subscription manifest file from your local machine to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in Managing content . | [
"/Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1",
"an http proxy server to use (enter server FQDN) proxy_hostname = myproxy.example.com port for http proxy server proxy_port = 8080 user name for authenticating to an http proxy, if needed proxy_user = password for basic http proxy auth, if needed proxy_password =",
"subscription-manager register",
"subscription-manager register Username: user_name Password: The system has been registered with ID: 541084ff2-44cab-4eb1-9fa1-7683431bcf9a",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite:el8",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"dnf upgrade",
"dnf install satellite",
"dnf install chrony",
"systemctl enable --now chronyd",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/installing_server_connected_satellite |
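Because re-running the installation script overwrites manual configuration changes to managed files, it can be worth previewing a re-run before applying it. A minimal sketch based only on the --noop behavior described at the start of this chapter (no changes are made; the candidate changes are written to the installer log):

satellite-installer --scenario satellite --noop
less /var/log/foreman-installer/satellite.log

Review the logged entries (for example, the Filebucket lines mentioned earlier) to see which managed files a real run would modify, and restore any file you want to keep from its Filebucket backup as shown above.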
Chapter 6. Planning resource usage in your cluster | Chapter 6. Planning resource usage in your cluster 6.1. Planning your environment based on tested cluster maximums This document describes how to plan your Red Hat OpenShift Service on AWS environment based on the tested cluster maximums. Oversubscribing the physical resources on a node affects the resource guarantees that the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster. The numbers noted in this documentation are based on Red Hat testing methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments. While planning your environment, determine how many pods are expected to fit per node using the following formula: The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application's memory, CPU, and storage requirements, as described in Planning your environment based on application requirements . Example scenario If you want to scope your cluster for 2200 pods per cluster, you would need at least nine nodes, assuming that there are 250 maximum pods per node: If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node: Where: 6.2. Planning your environment based on application requirements This document describes how to plan your Red Hat OpenShift Service on AWS environment based on your application requirements. Consider an example application environment: Pod type Pod quantity Max memory CPU cores Persistent storage apache 100 500 MB 0.5 1 GB node.js 200 1 GB 1 1 GB postgresql 100 1 GB 2 10 GB JBoss EAP 100 1 GB 1 1 GB Extrapolated requirements: 550 CPU cores, 450 GB RAM, and 1.4 TB storage. Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered. Node type Quantity CPUs RAM (GB) Nodes (option 1) 100 4 16 Nodes (option 2) 50 8 32 Nodes (option 3) 25 16 64 Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory can not be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. The application pods can access a service either by using environment variables or DNS. If using environment variables, for each active service the variables are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used in case you must go beyond 5000 services. When using environment variables for service discovery, if the argument list exceeds the allowed length after 5000 services in a namespace, then the pods and deployments will start failing. 
Disable the service links in the deployment's service specification file to overcome this: Example Kind: Template apiVersion: template.openshift.io/v1 metadata: name: deploymentConfigTemplate creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: "USD{IMAGE}" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR2_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR3_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR4_USD{IDENTIFIER} value: "USD{ENV_VALUE}" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - kind: Service apiVersion: v1 metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 portalIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: "[A-Za-z0-9]{255}" required: false labels: template: deploymentConfigTemplate The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and it is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables in to each pod scheduled to run in the namespace including: <SERVICE_NAME>_SERVICE_HOST=<IP> <SERVICE_NAME>_SERVICE_PORT=<PORT> <SERVICE_NAME>_PORT=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR> The pods in the namespace start to fail if the argument length exceeds the allowed value and if the number of characters in a service name impacts it. | [
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 250 = 8.8",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"Kind: Template apiVersion: template.openshift.io/v1 metadata: name: deploymentConfigTemplate creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - kind: Service apiVersion: v1 metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 portalIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deploymentConfigTemplate"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/prepare_your_environment/rosa-planning-environment |
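The sizing formula above is simple enough to script when you are iterating on cluster plans. A small sketch with hypothetical numbers (replace them with your own requirements); the integer ceiling division reproduces the 2200 / 250 = 8.8, rounded up to 9 nodes, example from this section:

required_pods=2200
pods_per_node=250
echo $(( (required_pods + pods_per_node - 1) / pods_per_node ))   # -> 9 nodes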
Chapter 8. Instance and container groups | Chapter 8. Instance and container groups Automation controller enables you to execute jobs through Ansible playbooks run directly on a member of the cluster or in a namespace of an OpenShift cluster with the necessary service account provisioned. This is called a container group. You can execute jobs in a container group only as-needed per playbook. For more information, see Container groups . For execution environments, see Execution environments in the Automation controller User Guide . 8.1. Instance groups Instances can be grouped into one or more instance groups. Instance groups can be assigned to one or more of the following listed resources: Organizations Inventories Job templates When a job associated with one of the resources executes, it is assigned to the instance group associated with the resource. During the execution process, instance groups associated with job templates are checked before those associated with inventories. Instance groups associated with inventories are checked before those associated with organizations. Therefore, instance group assignments for the three resources form the hierarchy: Job Template > Inventory > Organization Consider the following when working with instance groups: You can define other groups and group instances in those groups. These groups must be prefixed with instance_group_ . Instances are required to be in the automationcontroller or execution_nodes group alongside other instance_group_ groups. In a clustered setup, at least one instance must be present in the automationcontroller group, which appears as controlplane in the API instance groups. For more information and example scenarios, see Group policies for automationcontroller . You cannot modify the controlplane instance group, and attempting to do so results in a permission denied error for any user. Therefore, the Disassociate option is not available in the Instances tab of controlplane . A default API instance group is automatically created with all nodes capable of running jobs. This is like any other instance group but if a specific instance group is not associated with a specific resource, then the job execution always falls back to the default instance group. The default instance group always exists, and you cannot delete or rename it. Do not create a group named instance_group_default . Do not name any instance the same as a group name. 8.1.1. Group policies for automationcontroller Use the following criteria when defining nodes: Nodes in the automationcontroller group can define node_type hostvar to be hybrid (default) or control . Nodes in the execution_nodes group can define node_type hostvar to be execution (default) or hop . You can define custom groups in the inventory file by naming groups with instance_group_* where * becomes the name of the group in the API. You can also create custom instance groups in the API after the install has finished. The current behavior expects a member of an instance_group_* to be part of automationcontroller or execution_nodes group. Example After you run the installer, the following error appears: TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** fatal: [126-addr.tatu.home -> localhost]: FAILED! 
=> {"msg": "The host '110-addr.tatu.home' is not present in either [automationcontroller] or [execution_nodes]"} To fix this, move the box 110-addr.tatu.home to an execution_node group: [automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 [instance_group_test] 110-addr.tatu.home This results in: TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** ok: [126-addr.tatu.home -> localhost] => {"changed": false, "mesh": {"110-addr.tatu.home": {"node_type": "execution", "peers": [], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": true, "receptor_listener_port": 8928, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}, "126-addr.tatu.home": {"node_type": "control", "peers": ["110-addr.tatu.home"], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": false, "receptor_listener_port": 27199, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}}} After you upgrade from automation controller 4.0 or earlier, the legacy instance_group_ member likely has the awx code installed. This places that node in the automationcontroller group. 8.1.2. Configure instance groups from the API You can create instance groups by POSTing to /api/v2/instance_groups as a system administrator. Once created, you can associate instances with an instance group using: HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}` An instance that is added to an instance group automatically reconfigures itself to listen on the group's work queue. For more information, see the following section Instance group policies . 8.1.3. Instance group policies You can configure automation controller instances to automatically join instance groups when they come online by defining a policy. These policies are evaluated for every new instance that comes online. Instance group policies are controlled by the following three optional fields on an Instance Group : policy_instance_percentage : This is a number between 0 - 100. It guarantees that this percentage of active automation controller instances are added to this instance group. As new instances come online, if the number of instances in this group relative to the total number of instances is less than the given percentage, then new ones are added until the percentage condition is satisfied. policy_instance_minimum : This policy attempts to keep at least this many instances in the instance group. If the number of available instances is lower than this minimum, then all instances are placed in this instance group. policy_instance_list : This is a fixed list of instance names to always include in this instance group. The Instance Groups list view from the automation controller user interface (UI) provides a summary of the capacity levels for each instance group according to instance group policies: Additional resources For more information, see the Managing Instance Groups section of the Automation controller User Guide . 8.1.4. Notable policy considerations Take the following policy considerations into account: Both policy_instance_percentage and policy_instance_minimum set minimum allocations. The rule that results in more instances assigned to the group takes effect. 
For example, if you have a policy_instance_percentage of 50% and a policy_instance_minimum of 2 and you start 6 instances, 3 of them are assigned to the instance group. If you reduce the number of total instances in the cluster to 2, then both of them are assigned to the instance group to satisfy policy_instance_minimum . This enables you to set a lower limit on the amount of available resources. Policies do not actively prevent instances from being associated with multiple instance groups, but this can be achieved by making the percentages add up to 100. If you have 4 instance groups, assign each a percentage value of 25 and the instances are distributed among them without any overlap. 8.1.5. Pinning instances manually to specific groups If you have a special instance which needs to be exclusively assigned to a specific instance group but do not want it to automatically join other groups by "percentage" or "minimum" policies: Procedure Add the instance to one or more instance groups' policy_instance_list . Update the instance's managed_by_policy property to be False . This prevents the instance from being automatically added to other groups based on percentage and minimum policy. It only belongs to the groups you have manually assigned it to: HTTP PATCH /api/v2/instance_groups/N/ { "policy_instance_list": ["special-instance"] } HTTP PATCH /api/v2/instances/X/ { "managed_by_policy": False } 8.1.6. Job runtime behavior When you run a job associated with an instance group, note the following behaviors: If you divide a cluster into separate instance groups, then the behavior is similar to the cluster as a whole. If you assign two instances to a group then either one is as likely to receive a job as any other in the same group. As automation controller instances are brought online, it effectively expands the work capacity of the system. If you place those instances into instance groups, then they also expand that group's capacity. If an instance is performing work and it is a member of multiple groups, then capacity is reduced from all groups for which it is a member. De-provisioning an instance removes capacity from the cluster wherever that instance was assigned. For more information, see the Deprovisioning instance groups section for more detail. Note Not all instances are required to be provisioned with an equal capacity. 8.1.7. Control where a job runs If you associate instance groups with a job template, inventory, or organization, a job run from that job template is not eligible for the default behavior. This means that if all of the instances inside of the instance groups associated with these three resources are out of capacity, the job remains in the pending state until capacity becomes available. The order of preference in determining which instance group to submit the job to is as follows: Job template Inventory Organization (by way of project) If you associate instance groups with the job template, and all of these are at capacity, then the job is submitted to instance groups specified on the inventory, and then the organization. Jobs must execute in those groups in preferential order as resources are available. You can still associate the global default group with a resource, like any of the custom instance groups defined in the playbook. You can use this to specify a preferred instance group on the job template or inventory, but still enable the job to be submitted to any instance if those are out of capacity. 
Examples If you associate group_a with a job template and also associate the default group with its inventory, you enable the default group to be used as a fallback in case group_a gets out of capacity. In addition, it is possible to not associate an instance group with one resource but designate another resource as the fallback. For example, not associating an instance group with a job template and having it fall back to the inventory or the organization's instance group. This presents the following two examples: Associating instance groups with an inventory (omitting assigning the job template to an instance group) ensures that any playbook run against a specific inventory runs only on the group associated with it. This is useful in the situation where only those instances have a direct link to the managed nodes. An administrator can assign instance groups to organizations. This enables the administrator to segment out the entire infrastructure and guarantee that each organization has capacity to run jobs without interfering with any other organization's ability to run jobs. An administrator can assign multiple groups to each organization, similar to the following scenario: There are three instance groups: A , B , and C . There are two organizations: Org1 and Org2 . The administrator assigns group A to Org1 , group B to Org2 and then assigns group C to both Org1 and Org2 as an overflow for any extra capacity that may be needed. The organization administrators are then free to assign inventory or job templates to whichever group they want, or let them inherit the default order from the organization. Arranging resources this way offers you flexibility. You can also create instance groups with only one instance, enabling you to direct work towards a very specific Host in the automation controller cluster. 8.1.8. Instance group capacity limits There is external business logic that can drive the need to limit the concurrency of jobs sent to an instance group, or the maximum number of forks to be consumed. For traditional instances and instance groups, you might want to enable two organizations to run jobs on the same underlying instances, but limit each organization's total number of concurrent jobs. This can be achieved by creating an instance group for each organization and assigning the value for max_concurrent_jobs . For automation controller groups, automation controller is generally not aware of the resource limits of the OpenShift cluster. You can set limits on the number of pods on a namespace, or only resources available to schedule a certain number of pods at a time if no auto-scaling is in place. In this case, you can adjust the value for max_concurrent_jobs . Another parameter available is max_forks . This provides additional flexibility for capping the capacity consumed on an instance group or container group. You can use this if jobs with a wide variety of inventory sizes and "forks" values are being run. This enables you to limit an organization to run up to 10 jobs concurrently, but consume no more than 50 forks at a time: max_concurrent_jobs: 10 max_forks: 50 If 10 jobs that use 5 forks each are run, an eleventh job waits until one of these finishes to run on that group (or be scheduled on a different group with capacity). If 2 jobs are running with 20 forks each, then a third job with a task_impact of 11 or more waits until one of these finishes to run on that group (or be scheduled on a different group with capacity). 
For container groups, using the max_forks value is useful given that all jobs are submitted using the same pod_spec with the same resource requests, irrespective of the "forks" value of the job. The default pod_spec sets requests and not limits, so the pods can "burst" above their requested value without being throttled or reaped. By setting the max_forks value , you can help prevent a scenario where too many jobs with large forks values get scheduled concurrently and cause the OpenShift nodes to be oversubscribed with multiple pods using more resources than their requested value. To set the maximum values for the concurrent jobs and forks in an instance group, see Creating an instance group in the Automation controller User Guide . 8.1.9. Deprovisioning instance groups Re-running the setup playbook does not deprovision instances since clusters do not currently distinguish between an instance that you took offline intentionally or due to failure. Instead, shut down all services on the automation controller instance and then run the deprovisioning tool from any other instance. Procedure Shut down the instance or stop the service with the following command: automation-controller-service stop Run the following deprovision command from another instance to remove it from the controller cluster registry: awx-manage deprovision_instance --hostname=<name used in inventory file> Example Deprovisioning instance groups in automation controller does not automatically deprovision or remove instance groups, even though re-provisioning often causes these to be unused. They can still show up in API endpoints and stats monitoring. You can remove these groups with the following command: awx-manage unregister_queue --queuename=<name> Removing an instance's membership from an instance group in the inventory file and re-running the setup playbook does not ensure that the instance is not added back to a group. To be sure that an instance is not added back to a group, remove it through the API and also remove it in your inventory file. You can also stop defining instance groups in the inventory file. You can manage instance group topology through the automation controller UI. For more information about managing instance groups in the UI, see Managing Instance Groups in the Automation controller User Guide . Note If you have isolated instance groups created in older versions of automation controller (3.8.x and earlier) and want to migrate them to execution nodes to make them compatible for use with the automation mesh architecture, see Migrate isolated instances to execution nodes in the Ansible Automation Platform Upgrade and Migration Guide . 8.2. Container groups Ansible Automation Platform supports container groups, which enable you to execute jobs in automation controller regardless of whether automation controller is installed as a standalone, in a virtual environment, or in a container. Container groups act as a pool of resources within a virtual environment. You can create instance groups to point to an OpenShift container, which are job environments that are provisioned on-demand as a pod that exists only for the duration of the playbook run. This is known as the ephemeral execution model and ensures a clean environment for every job run. In some cases, you might want to set container groups to be "always-on", which you can configure through the creation of an instance. 
Note Container groups upgraded from versions prior to automation controller 4.0 revert back to default and remove the old pod definition, clearing out all custom pod definitions in the migration. Container groups are different from execution environments in that execution environments are container images and do not use a virtual environment. For more information, see Execution environments in the Automation controller User Guide . 8.2.1. Creating a container group A ContainerGroup is a type of InstanceGroup that has an associated credential that enables you to connect to an OpenShift cluster. Prerequisites A namespace that you can launch into. Every cluster has a "default" namespace, but you can use a specific namespace. A service account that has the roles that enable it to launch and manage pods in this namespace. If you are using execution environments in a private registry, and have a container registry credential associated with them in automation controller, the service account also needs the roles to get, create, and delete secrets in the namespace. If you do not want to give these roles to the service account, you can pre-create the ImagePullSecrets and specify them on the pod spec for the ContainerGroup . In this case, the execution environment must not have a container registry credential associated, or automation controller attempts to create the secret for you in the namespace. A token associated with that service account. An OpenShift or Kubernetes Bearer Token. A CA certificate associated with the cluster. The following procedure explains how to create a service account in an OpenShift cluster or Kubernetes, to be used to run jobs in a container group through automation controller. After the service account is created, its credentials are provided to automation controller in the form of an OpenShift or Kubernetes API Bearer Token credential. 
Procedure To create a service account, download and use the sample service account, containergroup sa and change it as needed to obtain the credentials: --- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get"] - apiGroups: [""] resources: ["pods/attach"] verbs: ["get", "list", "watch", "create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io Apply the configuration from containergroup-sa.yml : oc apply -f containergroup-sa.yml Get the secret name associated with the service account: export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"') Get the token from the secret: oc get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token Get the CA certificate: oc get secret USDSA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt Use the contents of containergroup-sa.token and containergroup-ca.crt to provide the information for the OpenShift or Kubernetes API Bearer Token required for the container group. To create a container group: Procedure Use the automation controller UI to create an OpenShift or Kubernetes API Bearer Token credential to use with your container group. For more information, see Creating a credential in the Automation controller User Guide . From the navigation panel select Administration Instance Groups . Click Add and select Create Container Group . Enter a name for your new container group and select the credential previously created to associate it to the container group. 8.2.2. Customizing the pod specification Ansible Automation Platform provides a simple default pod specification, however, you can provide a custom YAML or JSON document that overrides the default pod specification. This field uses any custom fields such as ImagePullSecrets , that can be "serialized" as valid pod JSON or YAML. A full list of options can be found in the Pods and Services section of the OpenShift documentation. Procedure To customize the pod specification, specify the namespace in the Pod Spec Override field by using the toggle to enable and expand the Pod Spec Override field. Click Save . You can provide additional customizations, if needed. Click Expand to view the entire customization window: Note The image when a job launches is determined by which execution environment is associated with the job. If you associate a container registry credential with the execution environment, then automation controller attempts to make an ImagePullSecret to pull the image. If you prefer not to give the service account permission to manage secrets, you must pre-create the ImagePullSecret and specify it on the pod specification, and omit any credential from the execution environment used. 
For more information, see the Allowing Pods to Reference Images from Other Secured Registries section of the Red Hat Container Registry Authentication article. Once you have created the container group successfully, the Details tab of the newly created container group remains, which enables you to review and edit your container group information. This is the same menu that is opened if you click the icon from the Instance Group link. You can also edit Instances and review Jobs associated with this instance group. Container groups and instance groups are labeled accordingly. 8.2.3. Verifying container group functions To verify the deployment and termination of your container: Procedure Create a mock inventory and associate the container group to it by populating the name of the container group in the Instance Group field. For more information, see Add a new inventory in the Automation controller User Guide . Create the localhost host in the inventory with variables: {'ansible_host': '127.0.0.1', 'ansible_connection': 'local'} Launch an ad hoc job against the localhost using the ping or setup module. Even though the Machine Credential field is required, it does not matter which one is selected for this test: You can see in the Jobs details view that the container was reached successfully using one of the ad hoc jobs. If you have an OpenShift UI, you can see pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a get pod operation on your namespace to watch these same events occurring in real-time. 8.2.4. View container group jobs When you run a job associated with a container group, you can see the details of that job in the Details view along with its associated container group and the execution environment that spun up. 8.2.5. Kubernetes API failure conditions When running a container group and the Kubernetes API responds that the resource quota has been exceeded, automation controller keeps the job in pending state. Other failures result in the traceback of the Error Details field showing the failure reason, similar to the following example: Error creating pod: pods is forbidden: User "system: serviceaccount: aap:example" cannot create resource "pods" in API group "" in the namespace "aap" 8.2.6. Container capacity limits Capacity limits and quotas for containers are defined by objects in the Kubernetes API: To set limits on all pods within a given namespace, use the LimitRange object. For more information see the Quotas and Limit Ranges section of the OpenShift documentation. To set limits directly on the pod definition launched by automation controller, see Customizing the pod specification and the Compute Resources section of the OpenShift documentation. Note Container groups do not use the capacity algorithm that normal nodes use. You need to set the number of forks at the job template level. If you configure forks in automation controller, that setting is passed along to the container. | [
"[automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] [instance_group_test] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928",
"TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** fatal: [126-addr.tatu.home -> localhost]: FAILED! => {\"msg\": \"The host '110-addr.tatu.home' is not present in either [automationcontroller] or [execution_nodes]\"}",
"[automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control [automationcontroller:vars] peers=execution_nodes [execution_nodes] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 [instance_group_test] 110-addr.tatu.home",
"TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] *** ok: [126-addr.tatu.home -> localhost] => {\"changed\": false, \"mesh\": {\"110-addr.tatu.home\": {\"node_type\": \"execution\", \"peers\": [], \"receptor_control_filename\": \"receptor.sock\", \"receptor_control_service_name\": \"control\", \"receptor_listener\": true, \"receptor_listener_port\": 8928, \"receptor_listener_protocol\": \"tcp\", \"receptor_log_level\": \"info\"}, \"126-addr.tatu.home\": {\"node_type\": \"control\", \"peers\": [\"110-addr.tatu.home\"], \"receptor_control_filename\": \"receptor.sock\", \"receptor_control_service_name\": \"control\", \"receptor_listener\": false, \"receptor_listener_port\": 27199, \"receptor_listener_protocol\": \"tcp\", \"receptor_log_level\": \"info\"}}}",
"HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}`",
"HTTP PATCH /api/v2/instance_groups/N/ { \"policy_instance_list\": [\"special-instance\"] } HTTP PATCH /api/v2/instances/X/ { \"managed_by_policy\": False }",
"max_concurrent_jobs: 10 max_forks: 50",
"automation-controller-service stop",
"awx-manage deprovision_instance --hostname=<name used in inventory file>",
"awx-manage deprovision_instance --hostname=hostB",
"awx-manage unregister_queue --queuename=<name>",
"--- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] resources: [\"pods/log\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods/attach\"] verbs: [\"get\", \"list\", \"watch\", \"create\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io",
"apply -f containergroup-sa.yml",
"export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '\"')",
"get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token",
"get secret USDSA_SECRET -o json | jq '.data[\"ca.crt\"]' | xargs | base64 --decode > containergroup-ca.crt",
"{'ansible_host': '127.0.0.1', 'ansible_connection': 'local'}",
"Error creating pod: pods is forbidden: User \"system: serviceaccount: aap:example\" cannot create resource \"pods\" in API group \"\" in the namespace \"aap\""
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/controller-instance-and-container-groups |
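As a quick illustration of the CLI check described in "Verifying container group functions" above, you can watch pods in the container group namespace appear and disappear while the ad hoc job runs. The namespace name is a placeholder:

oc get pods -n containergroup-namespace --watch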
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Red Hat OpenShift Data Foundation 4.12 supports deployment of Red Hat OpenShift on IBM Cloud clusters in connected environments. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_cloud/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 12. Working with certmonger | Chapter 12. Working with certmonger Part of managing machine authentication is managing machine certificates. The certmonger service manages certificate life cycle for applications and, if properly configured, can work together with a certificate authority (CA) to renew certificates. The certmonger daemon and its command-line clients simplify the process of generating public/private key pairs, creating certificate requests, and submitting requests to the CA for signing. The certmonger daemon monitors certificates for expiration and can renew certificates that are about to expire. The certificates that certmonger monitors are tracked in files stored in a configurable directory. The default location is /var/lib/certmonger/requests . Note The certmonger daemon cannot revoke certificates. A certificate can only be revoked by a relevant Certificate Authority, which needs to invalidate the certificate and update its Certificate Revocation List. 12.1. certmonger and Certificate Authorities By default, certmonger can automatically obtain three kinds of certificates that differ in what authority source the certificate employs: Self-signed certificate Generating a self-signed certificate does not involve any CA, because each certificate is signed using the certificate's own key. The software that is verifying a self-signed certificate needs to be instructed to trust that certificates directly in order to verify it. To obtain a self-signed certificate, run the selfsign-getcert command. Certificate from the Dogtag Certificate System CA as part of Red Hat Enterprise Linux IdM To obtain a certificate using an IdM server, run the ipa-getcert command Certificate signed by a local CA present on the system The software that is verifying a certificate signed by a local signer needs to be instructed to trust certificates from this local signer in order to verify them. To obtain a locally-signed certificate, run the local-getcert command. Other CAs can also use certmonger to manage certificates, but support must be added to certmonger by creating special CA helpers . For more information on how to create CA helpers, see the certmonger project documentation at https://pagure.io/certmonger/blob/master/f/doc/submit.txt . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/certmongerx |
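As a hedged illustration of the getcert family of commands named above, the following sketch requests a certificate signed by the local CA and then lists the requests that certmonger is tracking. The file paths are assumptions, not locations required by certmonger:

# Request a locally signed certificate and track it for renewal
local-getcert request -f /etc/pki/tls/certs/example.crt -k /etc/pki/tls/private/example.key

# List the certificates and requests that certmonger is currently tracking
getcert list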
Chapter 3. Setting up Maven locally | Chapter 3. Setting up Maven locally Typical Red Hat build of Apache Camel application development uses Maven to build and manage projects. 3.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download Maven 3.8.6 or later from the Maven download page . Tip To verify that you have the correct Maven and JDK version installed, open a command terminal and enter the following command: Check the output to verify that Maven is version 3.8.6 or newer, and is using OpenJDK 17. Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 3.3, "Using local Maven repositories" . 3.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: Note If you are using the camel-jira component, also add the atlassian repository. <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/maven-external/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 3.3. Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. 
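A hedged example of consuming this repository on a machine without Internet access (after it has been distributed there, as described next) is to point Maven at the copied directory and build in offline mode:

mvn clean install -o -Dmaven.repo.local=/tmp/my-project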
Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 3.4. Setting Maven mirror using environmental variables or system properties When running the applications, you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations, in order, for the settings.xml file: the explicitly specified URL; if not found, USD{user.home}/.m2/settings.xml; if not found, USD{maven.home}/conf/settings.xml; if not found, USD{M2_HOME}/conf/settings.xml. If no settings file is found in any of these locations, an empty org.apache.maven.settings.Settings instance is created. 3.4.1. About Maven mirror Maven uses a set of remote repositories to access artifacts that are not currently available in the local repository. The list of repositories almost always contains the Maven Central repository, but for Red Hat Fuse, it also contains the Maven Red Hat repositories. In some cases where it is not possible or not allowed to access different remote repositories, you can use the Maven mirror mechanism. A mirror replaces a particular repository URL with a different one, so that all HTTP traffic for remote artifact lookups can be directed to a single URL. 3.4.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either a system property or an environmental variable. 3.4.3. Setting Maven mirror using environmental variable or system property To set the Maven mirror using either an environmental variable or a system property, you can add: an environmental variable called MAVEN_MIRROR_URL to the bin/setenv file, or a system property called mavenMirrorUrl to the etc/system.properties file. 3.4.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror URL, other than the one specified by the environmental variable or system property, use the following Maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is simply mirror. 3.5. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact. A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters.
For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element. | [
"mvn --version",
"<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/maven-external/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>",
"mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project",
"<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>",
"groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version",
"<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>",
"<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/set-up-maven-locally |
Preface | Preface Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/pr01 |
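A hedged sketch of that port-first workflow with the OpenStack CLI might look like the following; the network, security group, image, flavor, and resource names are placeholders:

# Create a port and apply the RBAC-shared security group to it
openstack port create --network my-network --security-group shared-sg my-port

# Create the instance using the prepared port
openstack server create --image rhel9 --flavor m1.small --nic port-id=my-port my-instance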
23.5. CPU tuning | 23.5. CPU tuning <domain> ... <cputune> <vcpupin vcpu="0" cpuset="1-4,^2"/> <vcpupin vcpu="1" cpuset="0,1"/> <vcpupin vcpu="2" cpuset="2,3"/> <vcpupin vcpu="3" cpuset="0,4"/> <emulatorpin cpuset="1-3"/> <shares>2048</shares> <period>1000000</period> <quota>-1</quota> <emulator_period>1000000</emulator_period> <emulator_quota>-1</emulator_quota> </cputune> ... </domain> Figure 23.7. CPU Tuning Although all are optional, the components of this section of the domain XML are as follows: Table 23.4. CPU tuning elements Element Description <cputune> Provides details regarding the CPU tunable parameters for the domain. This is optional. <vcpupin> Specifies which of host physical machine's physical CPUs the domain vCPU will be pinned to. If this is omitted, and the cpuset attribute of the <vcpu> element is not specified, the vCPU is pinned to all the physical CPUs by default. It contains two required attributes: the <vcpu> attribute specifies id , and the cpuset attribute is same as the cpuset attribute in the <vcpu> element. <emulatorpin> Specifies which of the host physical machine CPUs the "emulator" (a subset of a domains not including <vcpu> ) will be pinned to. If this is omitted, and the cpuset attribute in the <vcpu> element is not specified, the "emulator" is pinned to all the physical CPUs by default. It contains one required cpuset attribute specifying which physical CPUs to pin to. emulatorpin is not allowed if the placement attribute in the <vcpu> element is set as auto . <shares> Specifies the proportional weighted share for the domain. If this is omitted, it defaults to the operating system provided defaults. If there is no unit for the value, it is calculated relative to the setting of the other guest virtual machine. For example, a guest virtual machine configured with a <shares> value of 2048 will get twice as much CPU time as a guest virtual machine configured with a <shares> value of 1024. <period> Specifies the enforcement interval in microseconds. By using <period> , each of the domain's vCPUs will not be allowed to consume more than its allotted quota worth of run time. This value should be within the following range: 1000-1000000 . A <period> with a value of 0 means no value. <quota> Specifies the maximum allowed bandwidth in microseconds. A domain with <quota> as any negative value indicates that the domain has infinite bandwidth, which means that it is not bandwidth controlled. The value should be within the following range: 1000 - 18446744073709551 or less than 0 . A quota with value of 0 means no value. You can use this feature to ensure that all vCPUs run at the same speed. <emulator_period> Specifies the enforcement interval in microseconds. Within an <emulator_period> , emulator threads (those excluding vCPUs) of the domain will not be allowed to consume more than the <emulator_quota> worth of run time. The <emulator_period> value should be in the following range: 1000 - 1000000 . An <emulator_period> with value of 0 means no value. <emulator_quota> Specifies the maximum allowed bandwidth in microseconds for the domain's emulator threads (those excluding vCPUs). A domain with an <emulator_quota> as a negative value indicates that the domain has infinite bandwidth for emulator threads (those excluding vCPUs), which means that it is not bandwidth controlled. The value should be in the following range: 1000 - 18446744073709551 , or less than 0 . An <emulator_quota> with value 0 means no value. | [
"<domain> <cputune> <vcpupin vcpu=\"0\" cpuset=\"1-4,^2\"/> <vcpupin vcpu=\"1\" cpuset=\"0,1\"/> <vcpupin vcpu=\"2\" cpuset=\"2,3\"/> <vcpupin vcpu=\"3\" cpuset=\"0,4\"/> <emulatorpin cpuset=\"1-3\"/> <shares>2048</shares> <period>1000000</period> <quota>-1</quota> <emulator_period>1000000</emulator_period> <emulator_quota>-1</emulator_quota> </cputune> </domain>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-cpu_tuning |
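As a hedged illustration, comparable tunables can also be applied to a running guest with virsh instead of editing the domain XML directly; the domain name guest1 is a placeholder:

# Pin vCPU 0 of the guest to host CPUs 1-4, excluding CPU 2
virsh vcpupin guest1 0 1-4,^2

# Pin the emulator threads to host CPUs 1-3
virsh emulatorpin guest1 1-3

# Set the proportional CPU weight for the guest
virsh schedinfo guest1 --set cpu_shares=2048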
Chapter 7. Templates and Pools | Chapter 7. Templates and Pools 7.1. Templates and Pools The Red Hat Virtualization environment provides administrators with tools to simplify the provisioning of virtual machines to users. These are templates and pools. A template is a shortcut that allows an administrator to quickly create a new virtual machine based on an existing, pre-configured virtual machine, bypassing operating system installation and configuration. This is especially helpful for virtual machines that will be used like appliances, for example web server virtual machines. If an organization uses many instances of a particular web server, an administrator can create a virtual machine that will be used as a template, installing an operating system, the web server, any supporting packages, and applying unique configuration changes. The administrator can then create a template based on the working virtual machine that will be used to create new, identical virtual machines as they are required. Virtual machine pools are groups of virtual machines based on a given template that can be rapidly provisioned to users. Permission to use virtual machines in a pool is granted at the pool level; a user who is granted permission to use the pool will be assigned any virtual machine from the pool. Inherent in a virtual machine pool is the transitory nature of the virtual machines within it. Because users are assigned virtual machines without regard for which virtual machine in the pool they have used in the past, pools are not suited for purposes which require data persistence. Virtual machine pools are best suited for scenarios where either user data is stored in a central location and the virtual machine is a means to accessing and using that data, or data persistence is not important. The creation of a pool results in the creation of the virtual machines that populate the pool, in a stopped state. These are then started on user request. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/chap-templates_and_pools |
Chapter 4. Installing and configuring automation hub on Red Hat OpenShift Container Platform web console | Chapter 4. Installing and configuring automation hub on Red Hat OpenShift Container Platform web console You can use these instructions to install the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database. Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings must always be set in lowercase in the Hub custom resource specification. Note When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information. 4.1. Prerequisites You have installed the Ansible Automation Platform Operator in Operator Hub. 4.2. Installing the automation hub operator Use this procedure to install the automation hub operator. Procedure Navigate to Operators Installed Operators . Locate the Automation hub entry, then click Create instance . Click Form view . Enter the name of the new instance. Optional: Add any necessary labels. Click Advanced configuration . From the PostgreSQL container storage requirements drop-down menu, select requests : Enter "100Gi" in the storage field. From the PostgreSQL container resource requirements drop-down menu, select requests : Enter "200" in the cpu field. Enter "512Mi" in the memory field. 4.2.1. Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections. The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage. If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany . ReadWriteMany is the default storage option. In addition, OpenShift Data Foundation provides a ReadWriteMany or S3-compliant implementation. Also, you can set up NFS storage configuration to support ReadWriteMany . This, however, introduces the NFS server as a potential single point of failure. Additional resources Persistent storage using NFS in the OpenShift Container Platform Storage guide IBM's How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment? 4.2.1.1. Provisioning OCP storage with ReadWriteMany access mode To ensure successful installation of Ansible Automation Platform Operator, you must provision your storage type for automation hub initially to ReadWriteMany access mode. Procedure Click Provisioning to update the access mode. In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany . Complete the additional steps in this section to create the persistent volume claim (PVC). 4.2.1.2.
Configuring object storage on Amazon S3 Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance. Prerequisites Create an Amazon S3 bucket to store the objects. Note the name of the S3 bucket. Procedure Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3 : USD oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-s3' stringData: s3-access-key-id: USDS3_ACCESS_KEY_ID s3-secret-access-key: USDS3_SECRET_ACCESS_KEY s3-bucket-name: USDS3_BUCKET_NAME s3-region: USDS3_REGION EOF Add the secret to the automation hub custom resource (CR) spec : spec: object_storage_s3_secret: test-s3 If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance. USD oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api 4.2.1.3. Configuring object storage on Azure Blob Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance. Prerequisites Create an Azure Storage blob container to store the objects. Note the name of the blob container. Procedure Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure : USD oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-azure' stringData: azure-account-name: USDAZURE_ACCOUNT_NAME azure-account-key: USDAZURE_ACCOUNT_KEY azure-container: USDAZURE_CONTAINER azure-container-path: USDAZURE_CONTAINER_PATH azure-connection-string: USDAZURE_CONNECTION_STRING EOF Add the secret to the automation hub custom resource (CR) spec : spec: object_storage_azure_secret: test-azure If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance. USD oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api 4.2.2. Configure your automation hub operator route options The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration . Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Hub tab. For new instances, click Create AutomationHub . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationHub . Click Advanced configuration . Under Ingress type , click the drop-down menu and select Route . Under Route DNS host , enter a common host name that the route answers to. Under Route TLS termination mechanism , click the drop-down menu and select Edge or Passthrough . Under Route TLS credential secret , click the drop-down menu and select a secret from the list. 4.2.3. Configuring the Ingress type for your automation hub operator The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration . 
Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Hub tab. For new instances, click Create AutomationHub . For existing instances, you can edit the YAML view by clicking the ... icon and then Edit AutomationHub . Click Advanced Configuration . Under Ingress type , click the drop-down menu and select Ingress . Under Ingress annotations , enter any annotations to add to the ingress. Under Ingress TLS secret , click the drop-down menu and select a secret from the list. After you have configured your automation hub operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform will now create the pods. This may take a few minutes. You can view the progress by navigating to Workloads Pods and locating the newly created instance. Verification Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation hub are running: Operator manager controllers automation controller automation hub The operator manager controllers for each of the 3 operators, include the following: automation-controller-operator-controller-manager automation-hub-operator-controller-manager resource-operator-controller-manager After deploying automation controller, you will see the addition of these pods: controller controller-postgres After deploying automation hub, you will see the addition of these pods: hub-api hub-content hub-postgres hub-redis hub-worker Note A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod. 4.3. Configuring LDAP authentication for Ansible automation hub on OpenShift Container Platform Configure LDAP authentication settings for Ansible Automation Platform on OpenShift Container Platform in the spec section of your Hub instance configuration file. Procedure Use the following example to configure LDAP in your automation hub instance. For any blank fields, enter `` . spec: pulp_settings: auth_ldap_user_attr_map: email: "mail" first_name: "givenName" last_name: "sn" auth_ldap_group_search_base_dn: 'cn=groups,cn=accounts,dc=example,dc=com' auth_ldap_bind_dn: ' ' auth_ldap_bind_password: ' ' auth_ldap_group_search_filter: (objectClass=posixGroup) auth_ldap_user_search_scope: SUBTREE auth_ldap_server_uri: 'ldap://ldapserver:389' authentication_backend_preset: ldap auth_ldap_mirror_groups: 'True' auth_ldap_user_search_base_dn: 'cn=users,cn=accounts,dc=example,dc=com' auth_ldap_bind_password: 'ldappassword' auth_ldap_user_search_filter: (uid=%(user)s) auth_ldap_group_search_scope: SUBTREE auth_ldap_user_flags_by_group: '@json {"is_superuser": "cn=tower-admin,cn=groups,cn=accounts,dc=example,dc=com"}' Note Do not leave any fields empty. For fields with no variable, enter `` to indicate a default value. 4.4. Accessing the automation hub user interface You can access the automation hub interface once all pods have successfully launched. Procedure Navigate to Networking Routes . Under Location , click on the URL for your automation hub instance. The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process. 
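If you prefer the command line to the web console for retrieving the administrator password mentioned in the following note, a hedged sketch is shown below. The secret name is an assumption based on the instance name (here hub) and the key name password; both may differ in your deployment:

# Assumed secret name: <instance-name>-admin-password in the deployment namespace
oc get secret hub-admin-password -n <namespace> -o jsonpath='{.data.password}' | base64 --decode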
Note If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads Secrets and open controller-admin-password. From there you can copy the password and paste it into the Automation hub password field. 4.5. Configuring an external database for automation hub on Red Hat Ansible Automation Platform Operator For users who prefer to deploy Ansible Automation Platform with an external database, they can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the oc create command. By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks. Note The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. The following section outlines the steps to configure an external database for your automation hub on a Ansible Automation Platform Operator. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. Note Ansible Automation Platform 2.4 supports PostgreSQL 13. Procedure The external postgres instance credentials and connection information will need to be stored in a secret, which will then be set on the automation hub spec. Create a postgres_configuration_secret .yaml file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 sslmode: "prefer" 5 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you want to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 5 The variable sslmode is valid for external databases only. The allowed values are: prefer , disable , allow , require , verify-ca , and verify-full . Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. USD oc create -f external-postgres-configuration-secret.yml When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below: apiVersion: automationhub.ansible.com/v1beta1 kind: AutomationHub metadata: name: hub-dev spec: postgres_configuration_secret: external-postgres-configuration 4.5.1. Enabling the hstore extension for the automation hub PostgreSQL database From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information, therefore the hstore extension to the automation hub PostgreSQL database must be enabled. 
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server. If the PostgreSQL database is external, you must enable the hstore extension to the automation hub PostreSQL database manually before automation hub installation. If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration. Procedure Check if the extension is available on the PostgreSQL server (automation hub database). USD psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'" Where the default value for <automation hub database> is automationhub . Example output with hstore available : name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row) Example output with hstore not available : name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows) On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command: dnf install postgresql-contrib Create the hstore PostgreSQL extension on the automation hub database with the following command: USD psql -d <automation hub database> -c "CREATE EXTENSION hstore;" The output of which is: CREATE EXTENSION In the following output, the installed_version field contains the hstore extension used, indicating that hstore is enabled. name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row) 4.6. Finding and deleting PVCs A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete them. Procedure List the existing PVCs in your deployment namespace: oc get pvc -n <namespace> Identify the PVC associated with your deployment by comparing the old deployment name and the PVC name. Delete the old PVC: oc delete pvc -n <namespace> <pvc-name> 4.7. Additional configurations A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration: spec: pulp_settings: ansible_collect_download_count: true When ansible_collect_download_count is enabled, automation hub will display a download count by the collection. 4.8. Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide. | [
"oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-s3' stringData: s3-access-key-id: USDS3_ACCESS_KEY_ID s3-secret-access-key: USDS3_SECRET_ACCESS_KEY s3-bucket-name: USDS3_BUCKET_NAME s3-region: USDS3_REGION EOF",
"spec: object_storage_s3_secret: test-s3",
"oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api",
"oc -n USDHUB_NAMESPACE apply -f- <<EOF apiVersion: v1 kind: Secret metadata: name: 'test-azure' stringData: azure-account-name: USDAZURE_ACCOUNT_NAME azure-account-key: USDAZURE_ACCOUNT_KEY azure-container: USDAZURE_CONTAINER azure-container-path: USDAZURE_CONTAINER_PATH azure-connection-string: USDAZURE_CONNECTION_STRING EOF",
"spec: object_storage_azure_secret: test-azure",
"oc -n USDHUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api",
"spec: pulp_settings: auth_ldap_user_attr_map: email: \"mail\" first_name: \"givenName\" last_name: \"sn\" auth_ldap_group_search_base_dn: 'cn=groups,cn=accounts,dc=example,dc=com' auth_ldap_bind_dn: ' ' auth_ldap_bind_password: ' ' auth_ldap_group_search_filter: (objectClass=posixGroup) auth_ldap_user_search_scope: SUBTREE auth_ldap_server_uri: 'ldap://ldapserver:389' authentication_backend_preset: ldap auth_ldap_mirror_groups: 'True' auth_ldap_user_search_base_dn: 'cn=users,cn=accounts,dc=example,dc=com' auth_ldap_bind_password: 'ldappassword' auth_ldap_user_search_filter: (uid=%(user)s) auth_ldap_group_search_scope: SUBTREE auth_ldap_user_flags_by_group: '@json {\"is_superuser\": \"cn=tower-admin,cn=groups,cn=accounts,dc=example,dc=com\"}'",
"apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque",
"oc create -f external-postgres-configuration-secret.yml",
"apiVersion: automationhub.ansible.com/v1beta1 kind: AutomationHub metadata: name: hub-dev spec: postgres_configuration_secret: external-postgres-configuration",
"psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"",
"name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)",
"name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)",
"dnf install postgresql-contrib",
"psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"",
"CREATE EXTENSION",
"name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)",
"get pvc -n <namespace>",
"delete pvc -n <namespace> <pvc-name>",
"spec: pulp_settings: ansible_collect_download_count: true"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-hub-operator |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_hybrid_and_multicloud_resources/providing-feedback-on-red-hat-documentation_rhodf |
Installing on OpenStack | Installing on OpenStack OpenShift Container Platform 4.15 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team | [
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack network create radio --provider-physical-network radio --provider-network-type flat --external",
"openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external",
"openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio",
"openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id",
"[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto",
"[connection] ipv6.addr-gen-mode=0",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/down-containers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.15/upi/openstack/update-network-resources.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"export OS_NET_ID=\"openshift-USD(dd if=/dev/urandom count=4 bs=1 2>/dev/null |hexdump -e '\"%02x\"')\"",
"echo USDOS_NET_ID",
"echo \"{\\\"os_net_id\\\": \\\"USDOS_NET_ID\\\"}\" | tee netid.json",
"ansible-playbook -i inventory.yaml network.yaml",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c 'import os import sys import yaml import re re_os_net_id = re.compile(r\"{{\\s*os_net_id\\s*}}\") os_net_id = os.getenv(\"OS_NET_ID\") path = \"common.yaml\" facts = None for _dict in yaml.safe_load(open(path))[0][\"tasks\"]: if \"os_network\" in _dict.get(\"set_fact\", {}): facts = _dict[\"set_fact\"] break if not facts: print(\"Cannot find `os_network` in common.yaml file. Make sure OpenStack resource names are defined in one of the tasks.\") sys.exit(1) os_network = re_os_net_id.sub(os_net_id, facts[\"os_network\"]) os_subnet = re_os_net_id.sub(os_net_id, facts[\"os_subnet\"]) path = \"install-config.yaml\" data = yaml.safe_load(open(path)) inventory = yaml.safe_load(open(\"inventory.yaml\"))[\"all\"][\"hosts\"][\"localhost\"] machine_net = [{\"cidr\": inventory[\"os_subnet_range\"]}] api_vips = [inventory[\"os_apiVIP\"]] ingress_vips = [inventory[\"os_ingressVIP\"]] ctrl_plane_port = {\"network\": {\"name\": os_network}, \"fixedIPs\": [{\"subnet\": {\"name\": os_subnet}}]} if inventory.get(\"os_subnet6_range\"): 1 os_subnet6 = re_os_net_id.sub(os_net_id, facts[\"os_subnet6\"]) machine_net.append({\"cidr\": inventory[\"os_subnet6_range\"]}) api_vips.append(inventory[\"os_apiVIP6\"]) ingress_vips.append(inventory[\"os_ingressVIP6\"]) data[\"networking\"][\"networkType\"] = \"OVNKubernetes\" data[\"networking\"][\"clusterNetwork\"].append({\"cidr\": inventory[\"cluster_network6_cidr\"], \"hostPrefix\": inventory[\"cluster_network6_prefix\"]}) data[\"networking\"][\"serviceNetwork\"].append(inventory[\"service_subnet6_range\"]) ctrl_plane_port[\"fixedIPs\"].append({\"subnet\": {\"name\": os_subnet6}}) data[\"networking\"][\"machineNetwork\"] = machine_net data[\"platform\"][\"openstack\"][\"apiVIPs\"] = api_vips data[\"platform\"][\"openstack\"][\"ingressVIPs\"] = ingress_vips data[\"platform\"][\"openstack\"][\"controlPlanePort\"] = ctrl_plane_port del data[\"platform\"][\"openstack\"][\"externalDNS\"] open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml update-network-resources.yaml 1",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"openshift-install --log-level debug wait-for install-complete",
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'",
"apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest",
"spec: additionalNetworks: - name: hwoffload1 namespace: cnf rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"hwoffload1\", \"type\": \"host-device\",\"pciBusId\": \"0000:00:05.0\", \"ipam\": {}}' 1 type: Raw",
"oc describe SriovNetworkNodeState -n openshift-sriov-network-operator",
"oc apply -f network.yaml",
"openstack port set --no-security-group --disable-port-security <compute_ipv6_port> 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift namespace: ipv6 spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - hello-openshift replicas: 2 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift annotations: k8s.v1.cni.cncf.io/networks: ipv6 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: hello-openshift securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8080",
"oc create -f <ipv6_enabled_resource> 1",
"oc edit networks.operator.openshift.io cluster",
"spec: additionalNetworks: - name: ipv6 namespace: ipv6 1 rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipv6\", \"type\": \"macvlan\", \"master\": \"ens4\"}' 2 type: Raw",
"oc get network-attachment-definitions -A",
"NAMESPACE NAME AGE ipv6 ipv6 21h",
"[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true",
"apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677",
"openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name>",
"controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3",
"openshift-install create cluster --dir <installation_directory> 1",
"oc wait clusteroperators --all --for=condition=Progressing=false 1",
"oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p ' 1 [ { \"op\": \"add\", \"path\": \"/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices\", 2 \"value\": [ { \"name\": \"etcd\", \"sizeGiB\": 10, \"storage\": { \"type\": \"Local\" 3 } } ] } ] '",
"oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false",
"cp_machines=USD(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name) 1 if [[ USD(echo \"USD{cp_machines}\" | wc -l) -ne 3 ]]; then exit 1 fi 2 for machine in USD{cp_machines}; do if ! oc get machine -n openshift-machine-api \"USD{machine}\" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then exit 1 fi 3 done",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )\" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\\x2dlabel-local\\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi\" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c \"[ -n \\\"USD(restorecon -nv /var/lib/etcd)\\\" ]\" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service",
"oc create -f 98-var-lib-etcd.yaml",
"oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master",
"oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s",
"oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: preserveBootstrapIgnition:",
"compute: platform: openstack: rootVolume: size:",
"compute: platform: openstack: rootVolume: types:",
"compute: platform: openstack: rootVolume: type:",
"compute: platform: openstack: rootVolume: zones:",
"controlPlane: platform: openstack: rootVolume: size:",
"controlPlane: platform: openstack: rootVolume: types:",
"controlPlane: platform: openstack: rootVolume: type:",
"controlPlane: platform: openstack: rootVolume: zones:",
"platform: openstack: cloud:",
"platform: openstack: externalNetwork:",
"platform: openstack: computeFlavor:",
"compute: platform: openstack: additionalNetworkIDs:",
"compute: platform: openstack: additionalSecurityGroupIDs:",
"compute: platform: openstack: zones:",
"compute: platform: openstack: serverGroupPolicy:",
"controlPlane: platform: openstack: additionalNetworkIDs:",
"controlPlane: platform: openstack: additionalSecurityGroupIDs:",
"controlPlane: platform: openstack: zones:",
"controlPlane: platform: openstack: serverGroupPolicy:",
"platform: openstack: clusterOSImage:",
"platform: openstack: clusterOSImageProperties:",
"platform: openstack: defaultMachinePlatform:",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"platform: openstack: ingressFloatingIP:",
"platform: openstack: apiFloatingIP:",
"platform: openstack: externalDNS:",
"platform: openstack: loadbalancer:",
"platform: openstack: machinesSubnet:",
"controlPlane: platform: gcp: osImage: project:",
"controlPlane: platform: gcp: osImage: name:",
"compute: platform: gcp: osImage: project:",
"compute: platform: gcp: osImage: name:",
"platform: gcp: network:",
"platform: gcp: networkProjectID:",
"platform: gcp: projectID:",
"platform: gcp: region:",
"platform: gcp: controlPlaneSubnet:",
"platform: gcp: computeSubnet:",
"platform: gcp: defaultMachinePlatform: zones:",
"platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: gcp: defaultMachinePlatform: osDisk: diskType:",
"platform: gcp: defaultMachinePlatform: osImage: project:",
"platform: gcp: defaultMachinePlatform: osImage: name:",
"platform: gcp: defaultMachinePlatform: tags:",
"platform: gcp: defaultMachinePlatform: type:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:",
"platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:",
"platform: gcp: defaultMachinePlatform: secureBoot:",
"platform: gcp: defaultMachinePlatform: confidentialCompute:",
"platform: gcp: defaultMachinePlatform: onHostMaintenance:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"controlPlane: platform: gcp: osDisk: diskSizeGB:",
"controlPlane: platform: gcp: osDisk: diskType:",
"controlPlane: platform: gcp: tags:",
"controlPlane: platform: gcp: type:",
"controlPlane: platform: gcp: zones:",
"controlPlane: platform: gcp: secureBoot:",
"controlPlane: platform: gcp: confidentialCompute:",
"controlPlane: platform: gcp: onHostMaintenance:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:",
"compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:",
"compute: platform: gcp: osDisk: diskSizeGB:",
"compute: platform: gcp: osDisk: diskType:",
"compute: platform: gcp: tags:",
"compute: platform: gcp: type:",
"compute: platform: gcp: zones:",
"compute: platform: gcp: secureBoot:",
"compute: platform: gcp: confidentialCompute:",
"compute: platform: gcp: onHostMaintenance:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/installing_on_openstack/index |
Release notes | Release notes Red Hat Developer Hub 1.4 Release notes for Red Hat Developer Hub 1.4 Red Hat Customer Content Services | [
"developerHub: flavor: <flavor>"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/release_notes/index |
Chapter 11. Swap Space | Chapter 11. Swap Space 11.1. What is Swap Space? Swap space in Linux is used when the physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a combination of swap partitions and swap files. The size of your swap space should be equal to twice your computer's physical RAM, for up to 2 GB of physical RAM. For physical RAM above 2 GB, add an additional amount of swap equal to the physical RAM above 2 GB. The size of your swap should never be less than 32 MB. Using this basic formula, a system with 2 GB of physical RAM would have 4 GB of swap, while one with 3 GB of physical RAM would have 5 GB of swap. Note Unfortunately, deciding on the amount of swap to allocate to Red Hat Enterprise Linux is more of an art than a science, so hard rules are not possible. Each system's most used applications should be accounted for when determining swap size. Important File systems and LVM2 volumes assigned as swap space cannot be in use when being modified. For example, no system processes can be assigned the swap space, and no swap space can be allocated to or in use by the kernel. Use the free and cat /proc/swaps commands to verify how much swap is in use and where. The best way to achieve swap space modifications is to boot your system in rescue mode, and then follow the instructions (for each scenario) in the remainder of this chapter. Refer to Chapter 5, Basic System Recovery for instructions on booting into rescue mode. When prompted to mount the file system, select Skip . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/swap_space
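To see how the current swap configuration looks in practice, the two commands mentioned in the Important note above can be run at any time. The short sketch below is only illustrative; the sizes, priorities, and device names it reports depend entirely on the system it is run on.

# Show total, used, and free swap in megabytes
free -m

# List every active swap partition or swap file with its size, usage, and priority
cat /proc/swaps

# Equivalent summary provided by the swapon utility
swapon -s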
Chapter 35. Using Metering on Streams for Apache Kafka | Chapter 35. Using Metering on Streams for Apache Kafka You can use the Metering tool that is available on OpenShift to generate metering reports from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. Using Prometheus as a default data source, you can generate reports on pods, namespaces, and most other OpenShift resources. You can also use the OpenShift Metering operator to analyze your installed Streams for Apache Kafka components to determine whether you are in compliance with your Red Hat subscription. To use metering with Streams for Apache Kafka, you must first install and configure the Metering operator on OpenShift Container Platform. 35.1. Metering resources Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. Metering is managed using the following CRDs: Table 35.1. Metering resources Name Description MeteringConfig Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. Reports Controls what query to use, when, and how often the query should be run, and where to store the results. ReportQueries Contains the SQL queries used to perform analysis on the data contained within ReportDataSources . ReportDataSources Controls the data available to ReportQueries and Reports. Allows configuring access to different databases for use within metering. 35.2. Metering labels for Streams for Apache Kafka The following table lists the metering labels for Streams for Apache Kafka infrastructure components and integrations. Table 35.2. Metering Labels Label Possible values com.company Red_Hat rht.prod_name Red_Hat_Application_Foundations rht.prod_ver 2025.Q1 rht.comp AMQ_Streams rht.comp_ver 2.9 rht.subcomp Infrastructure cluster-operator entity-operator topic-operator user-operator zookeeper Application kafka-broker kafka-connect kafka-connect-build kafka-mirror-maker2 kafka-mirror-maker cruise-control kafka-bridge kafka-exporter drain-cleaner rht.subcomp_t infrastructure application Examples Infrastructure example (where the infrastructure component is entity-operator ) com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=entity-operator rht.subcomp_t=infrastructure Application example (where the integration deployment name is kafka-bridge ) com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=kafka-bridge rht.subcomp_t=application | [
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=entity-operator rht.subcomp_t=infrastructure",
"com.company=Red_Hat rht.prod_name=Red_Hat_Application_Foundations rht.prod_ver=2025.Q1 rht.comp=AMQ_Streams rht.comp_ver=2.9 rht.subcomp=kafka-bridge rht.subcomp_t=application"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/using-metering-str |
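As a quick spot check of the labels listed above, you can inspect the labels on the running Streams for Apache Kafka pods. The sketch below assumes that the metering labels are applied as pod labels and that the components are deployed in a namespace called kafka; both are assumptions to adjust for your environment.

# List Streams for Apache Kafka pods together with all of their labels
oc get pods -n kafka --show-labels

# Select only pods that carry the Streams for Apache Kafka metering component label
oc get pods -n kafka -l rht.comp=AMQ_Streams

# Narrow the selection to infrastructure pods only
oc get pods -n kafka -l rht.subcomp_t=infrastructure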
Registry | Registry OpenShift Container Platform 4.7 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"oc create secret generic image-registry-private-configuration-user --from-file=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: s3: bucket: <bucket-name> region: <region-name>",
"regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local",
"oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry",
"oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"oc policy add-role-to-user registry-viewer <user_name>",
"oc policy add-role-to-user registry-editor <user_name>",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull name.io/image",
"sh-4.2# podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/image",
"oc get pods -n openshift-image-registry",
"NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m",
"oc logs deployments/image-registry -n openshift-image-registry",
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002",
"curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20",
"HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF",
"oc adm policy add-cluster-role-to-user prometheus-scraper <username>",
"openshift: version: 1.0 metrics: enabled: true",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data \"tls.crt\"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/USD{HOST}.crt > /dev/null",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/registry/index |
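Building on the route-exposure and login commands above, the following sketch shows how an image could be tagged and pushed through the exposed default route. It assumes the HOST variable is set as in the earlier steps, that podman login against it has already succeeded, and that a project referred to here as <project> exists and grants you push (registry-editor or builder) access; the source image is just an arbitrary public example.

# Pull an example public image locally
podman pull registry.access.redhat.com/ubi8/ubi-minimal

# Tag it for the exposed registry route and a target project
podman tag registry.access.redhat.com/ubi8/ubi-minimal $HOST/<project>/ubi-minimal:latest

# Push the image through the route into the internal registry
podman push $HOST/<project>/ubi-minimal:latest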
Chapter 3. Reporting data to Red Hat | Chapter 3. Reporting data to Red Hat Your subscription contract requires you to send your .tar files to Red Hat for accounting purposes. Your Red Hat partner representative will instruct you on how to send the files to Red Hat. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_inside/1.3/html/red_hat_ansible_inside_reporting_guide/reporting-return-data |
Chapter 9. 4.6 Release Notes | Chapter 9. 4.6 Release Notes 9.1. New Features The following major enhancements have been introduced in Red Hat Update Infrastructure 4.6. New Pulp version: 3.28 This update introduces a newer version of Pulp, 3.28, the latest LTS version. This update significantly modifies the Pulp database model, addressing many of the deadlock issues RHUI encountered when synchronizing large content volumes simultaneously. Redis is no longer included Redis is no longer included as part of the Pulp installation. Shared mount options for RHUI Installer have been enhanced With this update, the RHUI Installer's shared storage mounting options have been enhanced. You can now use the force option to alter the remote storage. For more information, see rhui-installer --help . rhui-manager status now includes CDS SSL certificate expiration checks With this update, CDS NGINX SSL certificate expiration checks are now available in the rhui-manager status report. RHUI Installer automatically initiates rhui-subscription-sync With this update, RHUI Installer automatically initiates rhui-subscription-sync after a successful installation. You no longer need to manually initiate the synchronization. 9.2. Bug Fixes The following bugs, which had a significant impact on users, have been fixed in Red Hat Update Infrastructure 4.6. RHUI no longer fails to recognize new RHEL minor version repositories Previously, RHUI failed to recognize new minor version RHEL repositories because of cached mappings. With this update, the issue has been fixed. Special characters can be used in the admin password Previously, you could not use certain special characters in the RHUI administrator password. With this update, the issue has been fixed. RHUI Installer did not handle rhui_active_login_file Previously, due to a problem in RHUI Installer, it failed to successfully process the rhui_active_login_file variable. With this update, the issue has been fixed. | null | https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/release_notes/assembly_4-6-release-notes_release-notes
Chapter 36. Language | Chapter 36. Language Only producer is supported The Language component allows you to send an Exchange to an endpoint which executes a script in any of the supported Languages in Camel. Having a component that executes language scripts allows for more dynamic routing capabilities. For example, by using the Routing Slip or Dynamic Router EIPs you can send messages to language endpoints where the script is dynamically defined as well. This component is provided out of the box in camel-core and hence no additional JARs are needed. You only have to include additional Camel components if the language of choice mandates it, such as the Groovy or JavaScript languages. 36.1. URI format You can refer to an external resource for the script using the same notation as supported by the other Languages in Camel. 36.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 36.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 36.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows you to avoid hardcoding urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 36.3. Component Options The Language component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component.
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 36.4. Endpoint Options The Language endpoint is configured using URI syntax: with the following path and query parameters: 36.4.1. Path Parameters (2 parameters) Name Description Default Type languageName (producer) Required Sets the name of the language to use. Enum values: bean constant exchangeProperty file groovy header javascript jsonpath mvel ognl ref simple spel sql terser tokenize xpath xquery xtokenize String resourceUri (producer) Path to the resource, or a reference to lookup a bean in the Registry to use as the resource. String 36.4.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API. false boolean binary (producer) Whether the script is binary content or text content. By default the script is read as text content (eg java.lang.String). false boolean cacheScript (producer) Whether to cache the compiled script and reuse it. Notice that reusing the script can cause side effects from processing one Camel org.apache.camel.Exchange to the next org.apache.camel.Exchange. false boolean contentCache (producer) Sets whether to use resource content cache or not. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean script (producer) Sets the script to execute. String transform (producer) Whether or not the result of the script should be used as the message body. This option is true by default. true boolean 36.5. Message Headers The following message headers can be used to affect the behavior of the component. Header Description CamelLanguageScript The script to execute provided in the header. Takes precedence over the script configured on the endpoint. 36.6. Examples For example, you can use the Simple language as a Message Translator to transform a message. You can also provide the script as a header, as shown below. Here we use the XPath language to extract the text from the <foo> tag. Object out = producer.requestBodyAndHeader("language:xpath", "<foo>Hello World</foo>", Exchange.LANGUAGE_SCRIPT, "/foo/text()"); assertEquals("Hello World", out); 36.7. Loading scripts from resources You can specify a resource uri for a script to load in either the endpoint uri, or in the Exchange.LANGUAGE_SCRIPT header. The uri must start with one of the following schemes: file:, classpath:, or http: By default the script is loaded once and cached. However, you can disable the contentCache option and have the script loaded on each evaluation.
For example if the file myscript.txt is changed on disk, then the updated script is used: You can refer to the resource similar to the other Languages in Camel by prefixing with "resource:" as shown below. 36.8. Spring Boot Auto-Configuration When using language with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-language-starter</artifactId> </dependency> The component supports 3 options, which are listed below. Name Description Default Type camel.component.language.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.language.enabled Whether to enable auto configuration of the language component. This is enabled by default. Boolean camel.component.language.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"language://languageName[:script][?options]",
"language://languageName:resource:scheme:location][?options]",
"language:languageName:resourceUri",
"Object out = producer.requestBodyAndHeader(\"language:xpath\", \"<foo>Hello World</foo>\", Exchange.LANGUAGE_SCRIPT, \"/foo/text()\"); assertEquals(\"Hello World\", out);",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-language-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-language-component-starter |
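The following is a hedged sketch of the same header-based technique described above for the Language component, using the Simple language instead of XPath; the endpoint URI, body, and classpath location shown in the comment are illustrative assumptions, not examples taken from the guide:
// The script is supplied at runtime through the CamelLanguageScript header;
// "language:simple" names the language and the header provides the script.
Object out = producer.requestBodyAndHeader("language:simple", "World",
        Exchange.LANGUAGE_SCRIPT, "Hello ${body}");
assertEquals("Hello World", out);
// A script could likewise be loaded from a resource and re-read on each call, for example:
// to("language:simple:resource:classpath:scripts/greeting.simple?contentCache=false")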
24.6. Monitoring Performance with Net-SNMP | 24.6. Monitoring Performance with Net-SNMP Red Hat Enterprise Linux 6 includes the Net-SNMP software suite, which includes a flexible and extensible Simple Network Management Protocol ( SNMP ) agent. This agent and its associated utilities can be used to provide performance data from a large number of systems to a variety of tools which support polling over the SNMP protocol. This section provides information on configuring the Net-SNMP agent to securely provide performance data over the network, retrieving the data using the SNMP protocol, and extending the SNMP agent to provide custom performance metrics. 24.6.1. Installing Net-SNMP The Net-SNMP software suite is available as a set of RPM packages in the Red Hat Enterprise Linux software distribution. Table 24.2, "Available Net-SNMP packages" summarizes each of the packages and their contents. Table 24.2. Available Net-SNMP packages Package Provides net-snmp The SNMP Agent Daemon and documentation. This package is required for exporting performance data. net-snmp-libs The netsnmp library and the bundled management information bases (MIBs). This package is required for exporting performance data. net-snmp-utils SNMP clients such as snmpget and snmpwalk . This package is required in order to query a system's performance data over SNMP. net-snmp-perl The mib2c utility and the NetSNMP Perl module. net-snmp-python An SNMP client library for Python. To install any of these packages, use the yum command in the following form: yum install package For example, to install the SNMP Agent Daemon and SNMP clients used in the rest of this section, type the following at a shell prompt: Note that you must have superuser privileges (that is, you must be logged in as root ) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, see Section 8.2.4, "Installing Packages" . | [
"~]# yum install net-snmp net-snmp-libs net-snmp-utils"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-System_Monitoring_Tools-Net-SNMP |
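As a quick, hedged check that the agent answers after installation (assuming the stock RHEL 6 snmpd.conf, which typically grants the public community read access to the system subtree):
~]# service snmpd start
~]# chkconfig snmpd on
~]# snmpwalk -v2c -c public localhost system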
7.2. Migrate from Synchronization to Trust Manually Using ID Views | 7.2. Migrate from Synchronization to Trust Manually Using ID Views You can use ID views to manually change the POSIX attributes that AD previously generated for AD users. Create a backup of the original synchronized user or group entries. Create a trust with the synchronized domain. For information about creating trusts, see Chapter 5, Creating Cross-forest Trusts with Active Directory and Identity Management . For every synchronized user or group, preserve the UID and GIDs generated by IdM by doing one of the following: Individually create an ID view applied to the specific host and add user ID overrides to the view. Create user ID overrides in the Default Trust View. For details, see Defining a Different Attribute Value for a User Account on Different Hosts . Note Only IdM users can manage ID views. AD users cannot. Delete the original synchronized user or group entries. For general information on using ID views in Active Directory environments, see Chapter 8, Using ID Views in Active Directory Environments . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/id-view-migration |
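A hedged sketch of the per-host ID view approach on the ipa command line; the view name, user anchor, UID/GID values, and host below are hypothetical placeholders for the values preserved from your synchronized entries:
# ipa idview-add migration_view --desc "Preserve winsync-generated POSIX attributes"
# ipa idoverrideuser-add migration_view ad_user@ad.example.com --uid=10042 --gidnumber=10042
# ipa idview-apply migration_view --hosts=client1.idm.example.com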
Chapter 13. GenericKafkaListenerConfiguration schema reference | Chapter 13. GenericKafkaListenerConfiguration schema reference Used in: GenericKafkaListener Full list of GenericKafkaListenerConfiguration schema properties Configuration for Kafka listeners. 13.1. brokerCertChainAndKey The brokerCertChainAndKey property is only used with listeners that have TLS encryption enabled. You can use the property to provide your own Kafka listener certificates. Example configuration for a loadbalancer external listener with TLS encryption enabled listeners: #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key # ... 13.2. externalTrafficPolicy The externalTrafficPolicy property is used with loadbalancer and nodeport listeners. When exposing Kafka outside of OpenShift you can choose Local or Cluster . Local avoids hops to other nodes and preserves the client IP, whereas Cluster does neither. The default is Cluster . 13.3. loadBalancerSourceRanges The loadBalancerSourceRanges property is only used with loadbalancer listeners. When exposing Kafka outside of OpenShift use source ranges, in addition to labels and annotations, to customize how a service is created. Example source ranges configured for a loadbalancer listener listeners: #... - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 # ... # ... 13.4. class The class property is only used with ingress listeners. You can configure the Ingress class using the class property. Example of an external listener of type ingress using Ingress class nginx-internal listeners: #... - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal # ... # ... 13.5. preferredNodePortAddressType The preferredNodePortAddressType property is only used with nodeport listeners. Use the preferredNodePortAddressType property in your listener configuration to specify the first address type checked as the node address. This property is useful, for example, if your deployment does not have DNS support, or you only want to expose a broker internally through an internal DNS or IP address. If an address of this type is found, it is used. If the preferred address type is not found, AMQ Streams proceeds through the types in the standard order of priority: ExternalDNS ExternalIP Hostname InternalDNS InternalIP Example of an external listener configured with a preferred node port address type listeners: #... - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS # ... # ... 13.6. useServiceDnsDomain The useServiceDnsDomain property is only used with internal and cluster-ip listeners. It defines whether the fully-qualified DNS names that include the cluster service suffix (usually .cluster.local ) are used. With useServiceDnsDomain set as false , the advertised addresses are generated without the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc . With useServiceDnsDomain set as true , the advertised addresses are generated with the service suffix; for example, my-cluster-kafka-0.my-cluster-kafka-brokers.myproject.svc.cluster.local . Default is false . Example of an internal listener configured to use the Service DNS domain listeners: #... 
- name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true # ... # ... If your OpenShift cluster uses a different service suffix than .cluster.local , you can configure the suffix using the KUBERNETES_SERVICE_DNS_DOMAIN environment variable in the Cluster Operator configuration. 13.7. GenericKafkaListenerConfiguration schema properties Property Description brokerCertChainAndKey Reference to the Secret which holds the certificate and private key pair which will be used for this listener. The certificate can optionally contain the whole chain. This field can be used only with listeners with enabled TLS encryption. CertAndKeySecretSource externalTrafficPolicy Specifies whether the service routes external traffic to node-local or cluster-wide endpoints. Cluster may cause a second hop to another node and obscures the client source IP. Local avoids a second hop for LoadBalancer and Nodeport type services and preserves the client source IP (when supported by the infrastructure). If unspecified, OpenShift will use Cluster as the default.This field can be used only with loadbalancer or nodeport type listener. string (one of [Local, Cluster]) loadBalancerSourceRanges A list of CIDR ranges (for example 10.0.0.0/8 or 130.211.204.1/32 ) from which clients can connect to load balancer type listeners. If supported by the platform, traffic through the loadbalancer is restricted to the specified CIDR ranges. This field is applicable only for loadbalancer type services and is ignored if the cloud provider does not support the feature. This field can be used only with loadbalancer type listener. string array bootstrap Bootstrap configuration. GenericKafkaListenerConfigurationBootstrap brokers Per-broker configurations. GenericKafkaListenerConfigurationBroker array ipFamilyPolicy Specifies the IP Family Policy used by the service. Available options are SingleStack , PreferDualStack and RequireDualStack . SingleStack is for a single IP family. PreferDualStack is for two IP families on dual-stack configured clusters or a single IP family on single-stack clusters. RequireDualStack fails unless there are two IP families on dual-stack configured clusters. If unspecified, OpenShift will choose the default value based on the service type. string (one of [RequireDualStack, SingleStack, PreferDualStack]) ipFamilies Specifies the IP Families used by the service. Available options are IPv4 and IPv6 . If unspecified, OpenShift will choose the default value based on the ipFamilyPolicy setting. string (one or more of [IPv6, IPv4]) array createBootstrapService Whether to create the bootstrap service or not. The bootstrap service is created by default (if not specified differently). This field can be used with the loadBalancer type listener. boolean class Configures a specific class for Ingress and LoadBalancer that defines which controller will be used. This field can only be used with ingress and loadbalancer type listeners. If not specified, the default controller is used. For an ingress listener, set the ingressClassName property in the Ingress resources. For a loadbalancer listener, set the loadBalancerClass property in the Service resources. string finalizers A list of finalizers which will be configured for the LoadBalancer type Services created for this listener. 
If supported by the platform, you can add the finalizer service.kubernetes.io/load-balancer-cleanup to make sure that the external load balancer is deleted together with the service. For more information, see https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#garbage-collecting-load-balancers . This field can be used only with loadbalancer type listeners. string array maxConnectionCreationRate The maximum connection creation rate we allow in this listener at any time. New connections will be throttled if the limit is reached. integer maxConnections The maximum number of connections we allow for this listener in the broker at any time. New connections are blocked if the limit is reached. integer preferredNodePortAddressType Defines which address type should be used as the node address. Available types are: ExternalDNS , ExternalIP , InternalDNS , InternalIP and Hostname . By default, the addresses will be used in the following order (the first one found will be used): ExternalDNS ExternalIP InternalDNS InternalIP Hostname This field is used to select the preferred address type, which is checked first. If no address is found for this address type, the other types are checked in the default order. This field can only be used with nodeport type listeners. string (one of [ExternalDNS, ExternalIP, Hostname, InternalIP, InternalDNS]) useServiceDnsDomain Configures whether the OpenShift service DNS domain should be used or not. If set to true , the generated addresses will contain the service DNS domain suffix (by default .cluster.local , can be configured using the environment variable KUBERNETES_SERVICE_DNS_DOMAIN ). Defaults to false . This field can be used only with internal and cluster-ip type listeners. boolean | [
"listeners: # - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret certificate: my-listener-certificate.crt key: my-listener-key.key",
"listeners: # - name: external port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #",
"listeners: # - name: external port: 9094 type: ingress tls: true configuration: class: nginx-internal #",
"listeners: # - name: external port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #",
"listeners: # - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-GenericKafkaListenerConfiguration-reference |
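A hedged example combining several of the schema properties described above on one loadbalancer listener; the numeric limits and the decision to skip the bootstrap service are illustrative values, not recommendations from this reference:
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      createBootstrapService: false
      finalizers:
        - service.kubernetes.io/load-balancer-cleanup
      maxConnections: 500
      maxConnectionCreationRate: 50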
Chapter 8. Build and run microservices applications on the OpenShift image for JBoss EAP XP | Chapter 8. Build and run microservices applications on the OpenShift image for JBoss EAP XP You can build and run your microservices applications on the OpenShift image for JBoss EAP XP. Note JBoss EAP XP is supported only on OpenShift 4 and later versions. Use the following workflow to build and run a microservices application on the OpenShift image for JBoss EAP XP by using the source-to-image (S2I) process. Note Default cloud-default-mp-config layer provide a standalone configuration file, which is based on the standalone-microprofile-ha.xml file. For more information about the server configuration files included in JBoss EAP XP, see the Standalone server configuration files section. This workflow uses the microprofile-config quickstart as an example. The quickstart provides a small, specific working example that can be used as a reference for your own project. See the microprofile-config quickstart that ships with JBoss EAP XP 5.0.0 for more information. Additional resources For more information about the server configuration files included in JBoss EAP XP, see Standalone server configuration files . 8.1. Preparing OpenShift for application deployment Prepare OpenShift for application deployment. Prerequisites You have installed an operational OpenShift instance. For more information, see the Installing and Configuring OpenShift Container Platform Clusters book on Red Hat Customer Portal . Procedure Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. A project allows a group of users to organize and manage content separately from other groups. You can create a project in OpenShift using the following command. For example, for the microprofile-config quickstart, create a new project named eap-demo using the following command. 8.2. Building and Deploying JBoss EAP XP Application Images using S2I Follow the source-to-image (S2I) workflow to build reproducible container images for a JBoss EAP XP application. These generated container images include the application deployment and ready-to-run JBoss EAP XP servers. The S2I workflow takes source code from a Git repository and injects it into a container that's based on the language and framework you want to use. After the S2I workflow is completed, the src code is compiled, the application is packaged and is deployed to the JBoss EAP XP server. Prerequisites You have an active Red Hat customer account. You have a Registry Service Account. Follow the instructions on the Red Hat Customer Portal to create an authentication token using a registry service account . You have downloaded the OpenShift secret YAML file, which you can use to pull images from Red Hat Ecosystem Catalog. For more information, see OpenShift Secret . You used the oc login command to log in to OpenShift. You have installed Helm. For more information, see Installing Helm . You have installed the repository for the JBoss EAP Helm charts by entering this command in the management CLI: Procedure Create a file named helm.yaml using the following YAML content: build: uri: https://github.com/jboss-developer/jboss-eap-quickstarts.git ref: XP_5.0.0.GA contextDir: microprofile-config mode: s2i deploy: replicas: 1 Use the following command to deploy your JBoss EAP XP application on Openshift. Note This procedure is very similar to Building application images using source-to-image in OpenShift . 
For more information about that procedure see Using JBoss EAP on OpenShift Container Platform . Verification Access the application using curl . You get the output MyPropertyFileConfigValue confirming that the application is deployed. 8.3. Completing post-deployment tasks for JBoss EAP XP source-to-image (S2I) application Depending on your application, you might need to complete some tasks after your OpenShift application has been built and deployed. Examples of post-deployment tasks include the following: Exposing a service so that the application is viewable from outside of OpenShift. Scaling your application to a specific number of replicas. Procedure Get the service name of your application using the following command. Optional : Expose the main service as a route so you can access your application from outside of OpenShift. For example, for the microprofile-config quickstart, use the following command to expose the required service and port. Get the URL of the route. Access the application in your web browser using the URL. The URL is the value of the HOST/PORT field from command's output. Note For JBoss EAP XP 5.0.0 GA distribution, the Microprofile Config quickstart does not reply to HTTPS GET requests to the application's root context. This enhancement is only available in the {JBossXPShortName101} GA distribution. For example, to interact with the Microprofile Config application, the URL might be http:// HOST_PORT_Value /config/value in your browser. If your application does not use the JBoss EAP root context, append the context of the application to the URL. For example, for the microprofile-config quickstart, the URL might be http:// HOST_PORT_VALUE /microprofile-config/ . Optionally, you can scale up the application instance by running the following command. This command increases the number of replicas to 3. For example, for the microprofile-config quickstart, use the following command to scale up the application. Additional Resources For more information about JBoss EAP XP Quickstarts, see the Use the Quickstarts section in the Using MicroProfile in JBoss EAP guide. | [
"oc new-project PROJECT_NAME",
"oc new-project eap-demo",
"helm repo add jboss-eap https://jbossas.github.io/eap-charts/",
"build: uri: https://github.com/jboss-developer/jboss-eap-quickstarts.git ref: XP_5.0.0.GA contextDir: microprofile-config mode: s2i deploy: replicas: 1",
"helm install microprofile-config -f helm.yaml jboss-eap/eap-xp5",
"curl https://USD(oc get route microprofile-config --template='{{ .spec.host }}')/config/value",
"oc get service",
"oc expose service/microprofile-config --port=8080",
"oc get route",
"oc scale deploymentconfig DEPLOYMENTCONFIG_NAME --replicas=3",
"oc scale deployment/microprofile-config --replicas=3"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/using-the-openshift-image-for-jboss-eap-xp_default |
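A hedged set of checks after the Helm install above; the release, deployment, and route names follow the microprofile-config example and may differ in your configuration:
helm status microprofile-config
oc get deployment microprofile-config
curl https://$(oc get route microprofile-config --template='{{ .spec.host }}')/config/value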
5.5. Creating Replicated Volumes | 5.5. Creating Replicated Volumes Replicated volume creates copies of files across multiple bricks in the volume. Use replicated volumes in environments where high-availability and high-reliability are critical. Use gluster volume create to create different types of volumes, and gluster volume info to verify successful volume creation. Prerequisites A trusted storage pool has been created, as described in Section 4.1, "Adding Servers to the Trusted Storage Pool" . Understand how to start and stop volumes, as described in Section 5.10, "Starting Volumes" . Warning Red Hat no longer recommends the use of two-way replication without arbiter bricks as Two-way replication without arbiter bricks is deprecated with Red Hat Gluster Storage 3.4 and no longer supported. This change affects both replicated and distributed-replicated volumes that do not use arbiter bricks. Two-way replication without arbiter bricks is being deprecated because it does not provide adequate protection from split-brain conditions. Even in distributed-replicated configurations, two-way replication cannot ensure that the correct copy of a conflicting file is selected without the use of a tie-breaking node. While a dummy node can be used as an interim solution for this problem, Red Hat strongly recommends that all volumes that currently use two-way replication without arbiter bricks are migrated to use either arbitrated replication or three-way replication. Instructions for migrating a two-way replicated volume without arbiter bricks to an arbitrated replicated volume are available in the 5.7.5. Converting to an arbitrated volume . Information about three-way replication is available in Section 5.5.1, "Creating Three-way Replicated Volumes" and Section 5.6.1, "Creating Three-way Distributed Replicated Volumes" . 5.5.1. Creating Three-way Replicated Volumes Three-way replicated volume creates three copies of files across multiple bricks in the volume. The number of bricks must be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers. Synchronous three-way replication is now fully supported in Red Hat Gluster Storage. It is recommended that three-way replicated volumes use JBOD, but use of hardware RAID with three-way replicated volumes is also supported. Figure 5.2. Illustration of a Three-way Replicated Volume Creating three-way replicated volumes Run the gluster volume create command to create the replicated volume. The syntax is # gluster volume create NEW-VOLNAME [replica COUNT ] [transport tcp | rdma (Deprecated) | tcp,rdma] NEW-BRICK... The default value for transport is tcp . Other options can be passed such as auth.allow or auth.reject . See Section 11.1, "Configuring Volume Options" for a full list of parameters. Example 5.3. Replicated Volume with Three Storage Servers The order in which bricks are specified determines how bricks are replicated with each other. For example, every n bricks, where 3 is the replica count forms a replica set. This is illustrated in Figure 5.2, "Illustration of a Three-way Replicated Volume" . Run # gluster volume start VOLNAME to start the volume. Run gluster volume info command to optionally display the volume information. Important By default, the client-side quorum is enabled on three-way replicated volumes to minimize split-brain scenarios. 
For more information on client-side quorum, see Section 11.15.1.2, "Configuring Client-Side Quorum" 5.5.2. Creating Sharded Replicated Volumes Sharding breaks files into smaller pieces so that they can be distributed across the bricks that comprise a volume. This is enabled on a per-volume basis. When sharding is enabled, files written to a volume are divided into pieces. The size of the pieces depends on the value of the volume's features.shard-block-size parameter. The first piece is written to a brick and given a GFID like a normal file. Subsequent pieces are distributed evenly between bricks in the volume (sharded bricks are distributed by default), but they are written to that brick's .shard directory, and are named with the GFID and a number indicating the order of the pieces. For example, if a file is split into four pieces, the first piece is named GFID and stored normally. The other three pieces are named GFID.1, GFID.2, and GFID.3 respectively. They are placed in the .shard directory and distributed evenly between the various bricks in the volume. Because sharding distributes files across the bricks in a volume, it lets you store files with a larger aggregate size than any individual brick in the volume. Because the file pieces are smaller, heal operations are faster, and geo-replicated deployments can sync the small pieces of a file that have changed, rather than syncing the entire aggregate file. Sharding also lets you increase volume capacity by adding bricks to a volume in an ad-hoc fashion. 5.5.2.1. Supported use cases Sharding has one supported use case: in the context of providing Red Hat Gluster Storage as a storage domain for Red Hat Enterprise Virtualization, to provide storage for live virtual machine images. Note that sharding is also a requirement for this use case, as it provides significant performance improvements over implementations. Important Quotas are not compatible with sharding. Important Sharding is supported in new deployments only, as there is currently no upgrade path for this feature. Example 5.4. Example: Three-way replicated sharded volume Set up a three-way replicated volume, as described in the Red Hat Gluster Storage Administration Guide : https://access.redhat.com/documentation/en-US/red_hat_gluster_storage/3.5/html/Administration_Guide/sect-Creating_Replicated_Volumes.html#Creating_Three-way_Replicated_Volumes . Before you start your volume, enable sharding on the volume. Start the volume and ensure it is working as expected. 5.5.2.2. Configuration Options Sharding is enabled and configured at the volume level. The configuration options are as follows. features.shard Enables or disables sharding on a specified volume. Valid values are enable and disable . The default value is disable . Note that this only affects files created after this command is run; files created before this command is run retain their old behaviour. features.shard-block-size Specifies the maximum size of the file pieces when sharding is enabled. The supported value for this parameter is 512MB. Note that this only affects files created after this command is run; files created before this command is run retain their old behaviour. 5.5.2.3. Finding the pieces of a sharded file When you enable sharding, you might want to check that it is working correctly, or see how a particular file has been sharded across your volume. To find the pieces of a file, you need to know that file's GFID. 
To obtain a file's GFID, run: Once you have the GFID, you can run the following command on your bricks to see how this file has been distributed: | [
"gluster v create glutervol data replica 3 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick2 server3:/rhgs/brick3 volume create: glutervol: success: please start the volume to access",
"gluster v start glustervol volume start: glustervol: success",
"gluster volume set test-volume features.shard enable",
"gluster volume test-volume start gluster volume info test-volume",
"gluster volume set volname features.shard enable",
"gluster volume set volname features.shard-block-size 32MB",
"getfattr -d -m. -e hex path_to_file",
"ls /rhgs/*/.shard -lh | grep GFID"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Creating_Replicated_Volumes |
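A hedged walk-through of the two commands above; the mount point, brick path, and GFID are hypothetical, the getfattr output is abridged, and the hex value is rewritten in UUID form (with dashes) to match the shard file names:
# getfattr -d -m. -e hex /mnt/glustervol/vm1.img
trusted.gfid=0xfc02829655a04dc5a6a74f9c7e6eab36
# ls /rhgs/*/.shard -lh | grep fc028296-55a0-4dc5-a6a7-4f9c7e6eab36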
8.2.3. Displaying Package Information | 8.2.3. Displaying Package Information To display information about one or more packages (glob expressions are valid here as well), use the following command: yum info package_name For example, to display information about the abrt package, type: The yum info package_name command is similar to the rpm -q --info package_name command, but provides as additional information the ID of the Yum repository the RPM package is found in (look for the From repo: line in the output). You can also query the Yum database for alternative and useful information about a package by using the following command: yumdb info package_name This command provides additional information about a package, including the check sum of the package (and algorithm used to produce it, such as SHA-256), the command given on the command line that was invoked to install the package (if any), and the reason that the package is installed on the system (where user indicates it was installed by the user, and dep means it was brought in as a dependency). For example, to display additional information about the yum package, type: For more information on the yumdb command, see the yumdb (8) manual page. Listing Files Contained in a Package repoquery is a program for querying information from yum repositories similarly to rpm queries. You can query both package groups and individual packages. To list all files contained in a specific package, type: repoquery --list package_name Replace package_name with a name of the package you want to inspect. For more information on the repoquery command, see the repoquery manual page. To find out which package provides a specific file, you can use the yum provides command, described in Finding which package owns a file | [
"~]# yum info abrt Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Installed Packages Name : abrt Arch : x86_64 Version : 1.0.7 Release : 5.el6 Size : 578 k Repo : installed From repo : rhel Summary : Automatic bug detection and reporting tool URL : https://fedorahosted.org/abrt/ License : GPLv2+ Description: abrt is a tool to help users to detect defects in applications : and to create a bug report with all informations needed by : maintainer to fix it. It uses plugin system to extend its : functionality.",
"~]# yumdb info yum Loaded plugins: product-id, refresh-packagekit, subscription-manager yum-3.2.27-4.el6.noarch checksum_data = 23d337ed51a9757bbfbdceb82c4eaca9808ff1009b51e9626d540f44fe95f771 checksum_type = sha256 from_repo = rhel from_repo_revision = 1298613159 from_repo_timestamp = 1298614288 installed_by = 4294967295 reason = user releasever = 6.1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-displaying_package_information |
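For example, to list the files in an installed package and then confirm which package owns one of those files (a hedged illustration using the yum package itself):
~]# repoquery --list yum
~]# yum provides /etc/yum.conf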
25.3.4. Configuring a System z Network Device for Network Root File System | 25.3.4. Configuring a System z Network Device for Network Root File System To add a network device that is required to access the root file system, you only have to change the boot options. The boot options can be in a parameter file (refer to Chapter 26, Parameter and Configuration Files ) or part of a zipl.conf on a DASD or FCP-attached SCSI LUN prepared with the zipl boot loader. There is no need to recreate the initramfs. Dracut (the mkinitrd successor that provides the functionality in the initramfs that in turn replaces initrd ) provides a boot parameter to activate network devices on System z early in the boot process: rd_ZNET= . As input, this parameter takes a comma-separated list of the NETTYPE (qeth, lcs, ctc), two (lcs, ctc) or three (qeth) device bus IDs, and optional additional parameters consisting of key-value pairs corresponding to network device sysfs attributes. This parameter configures and activates the System z network hardware. The configuration of IP addresses and other network specifics works the same as for other platforms. Refer to the dracut documentation for more details. cio_ignore for the network channels is handled transparently on boot. Example boot options for a root file system accessed over the network through NFS: | [
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!0.0.0009 rd_ZNET=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:eth0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-s390info-adding_a_network_device-configuring_network_device_for_network_root_file_system |
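A hedged sketch of where these boot options live in a zipl.conf boot section; the section name, kernel image, and initramfs paths are placeholders, and your existing configuration layout may differ:
[defaultboot]
default = linux
[linux]
target = /boot
image = /boot/vmlinuz-2.6.32-71.el6.s390x
ramdisk = /boot/initramfs-2.6.32-71.el6.s390x.img
parameters = "root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!0.0.0009 rd_ZNET=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs-server.subdomain.domain:eth0:none"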
Chapter 53. Annotation Inheritance | Chapter 53. Annotation Inheritance Abstract JAX-RS annotations can be inherited by subclasses and classes implementing annotated interfaces. The inheritance mechanism allows subclasses and implementation classes to override the annotations inherited from their parents. Overview Inheritance is one of the more powerful mechanisms in Java because it allows developers to create generic objects that can then be specialized to meet particular needs. JAX-RS keeps this power by allowing the annotations used in mapping classes to resources to be inherited from super classes. JAX-RS's annotation inheritance also extends to support for interfaces. Implementation classes inherit the JAX-RS annotations used in the interface they implement. The JAX-RS inheritance rules do provide a mechanism for overriding inherited annotations. However, it is not possible to completely remove JAX-RS annotations from a construct that inherits them from a super class or interface. Inheritance rules Resource classes inherit any JAX-RS annotations from the interface(s) they implement. Resource classes also inherit any JAX-RS annotations from any super classes they extend. Annotations inherited from a super class take precedence over annotations inherited from an interface. In the code sample shown in Example 53.1, "Annotation inheritance" , the Kaijin class' getMonster() method inherits the @Path , @GET , and @PathParam annotations from the Kaiju interface. Example 53.1. Annotation inheritance Overriding inherited annotations Overriding inherited annotations is as easy as providing new annotations. If the subclass, or implementation class, provides any of its own JAX-RS annotations for a method then all of the inherited JAX-RS annotations for that method are ignored. In the code sample shown in Example 53.2, "Overriding annotation inheritance" , the Kaijin class' getMonster() method does not inherit any of the annotations from the Kaiju interface. The implementation class overrides the @Produces annotation, which causes all of the annotations from the interface to be ignored. Example 53.2. Overriding annotation inheritance | [
"public interface Kaiju { @GET @Path(\"/{id}\") public Monster getMonster(@PathParam(\"id\") int id); } @Path(\"/kaijin\") public class Kaijin implements Kaiju { public Monster getMonster(int id) { } }",
"public interface Kaiju { @GET @Path(\"/{id}\") @Produces(\"text/xml\"); public Monster getMonster(@PathParam(\"id\") int id); } @Path(\"/kaijin\") public class Kaijin implements Kaiju { @GET @Path(\"/{id}\") @Produces(\"application/octect-stream\"); public Monster getMonster(@PathParam(\"id\") int id) { } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/restannotateinherit |
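One practical consequence of the rule above, sketched under the assumption of the same Kaiju interface: if an implementation declares only @Produces and omits @GET, @Path, and @PathParam, the inherited annotations are discarded and the method is no longer bound to GET /kaijin/{id}, so an override must restate every annotation it needs.
@Path("/kaijin")
public class Kaijin implements Kaiju {
    // Declaring any annotation discards the inherited ones, so without @GET and
    // @Path this method is no longer mapped to GET /kaijin/{id}.
    @Produces("application/json")
    public Monster getMonster(int id) { }
}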
14.4. Samba Security Modes | 14.4. Samba Security Modes There are only two types of security modes for Samba, share-level and user-level , which are collectively known as security levels . Share-level security can only be implemented in one way, while user-level security can be implemented in one of four different ways. The different ways of implementing a security level are called security modes . 14.4.1. User-Level Security User-level security is the default setting for Samba. Even if the security = user directive is not listed in the smb.conf file, it is used by Samba. If the server accepts the client's username/password, the client can then mount multiple shares without specifying a password for each instance. Samba can also accept session-based username/password requests. The client maintains multiple authentication contexts by using a unique UID for each logon. In smb.conf , the security = user directive that sets user-level security is: | [
"[GLOBAL] security = user"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-security-modes |
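A slightly fuller, hedged smb.conf sketch built around the same directive; the password backend and share definition are illustrative additions, not settings taken from this guide:
[global]
security = user
passdb backend = tdbsam
[homes]
comment = Home Directories
browseable = no
writable = yes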
Chapter 8. Managing Cluster Resources | Chapter 8. Managing Cluster Resources This chapter describes various commands you can use to manage cluster resources. It provides information on the following procedures. Section 8.1, "Manually Moving Resources Around the Cluster" Section 8.2, "Moving Resources Due to Failure" Section 8.4, "Enabling, Disabling, and Banning Cluster Resources" Section 8.5, "Disabling a Monitor Operation" 8.1. Manually Moving Resources Around the Cluster You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When individually specified resources needs to be moved To move all resources running on a node to a different node, you put the node in standby mode. For information on putting a cluster node in standby node, see Section 4.4.5, "Standby Mode" . You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running, as described in Section 8.1.1, "Moving a Resource from its Current Node" . You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node" . 8.1.1. Moving a Resource from its Current Node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. Note When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource move command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds). To distinguish a unit of minutes(M) from a unit of months(M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes. The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command. 
You can optionally configure a --wait[= n ] parameter for the pcs resource move command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes. For information on resource constraints, see Chapter 7, Resource Constraints . 8.1.2. Moving a Resource to its Preferred Node After a resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings and may change over time. If you do not specify any resources, all resource are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command. | [
"pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]",
"pcs property set cluster-recheck-interval= value",
"pcs resource move resource1 example-node2 lifetime=PT1H30M",
"pcs resource move resource1 example-node2 lifetime=PT30M",
"pcs resource relocate run [ resource1 ] [ resource2 ]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-manageresource-HAAR |
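A hedged sequence tying the commands above together; the resource and node names are placeholders:
# pcs resource move resource1 example-node2 lifetime=PT2H
# pcs constraint location
# pcs resource clear resource1
# pcs resource relocate show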
Chapter 9. Downgrading AMQ Streams | Chapter 9. Downgrading AMQ Streams If you are encountering issues with the version of AMQ Streams you upgraded to, you can revert your installation to the previous version. You can perform a downgrade to: Revert your Cluster Operator to the previous AMQ Streams version. Section 9.1, "Downgrading the Cluster Operator to a previous version" Downgrade all Kafka brokers and client applications to the previous Kafka version. Section 9.2, "Downgrading Kafka" If the previous version of AMQ Streams does not support the version of Kafka you are using, you can also downgrade Kafka as long as the log message format versions appended to messages match. 9.1. Downgrading the Cluster Operator to a previous version If you are encountering issues with AMQ Streams, you can revert your installation. This procedure describes how to downgrade a Cluster Operator deployment to a previous version. Prerequisites An existing Cluster Operator deployment is available. You have downloaded the installation files for the previous version. Procedure Take note of any configuration changes made to the existing Cluster Operator resources (in the /install/cluster-operator directory). Any changes will be overwritten by the previous version of the Cluster Operator. Revert your custom resources to reflect the supported configuration options available for the version of AMQ Streams you are downgrading to. Update the Cluster Operator. Modify the installation files for the previous version according to the namespace the Cluster Operator is running in. On Linux, use: On MacOS, use: If you modified one or more environment variables in your existing Cluster Operator Deployment , edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to use those environment variables. When you have an updated configuration, deploy it along with the rest of the installation resources: oc replace -f install/cluster-operator Wait for the rolling updates to complete. Get the image for the Kafka pod to ensure the downgrade was successful: oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}' The image tag shows the new AMQ Streams version followed by the Kafka version. For example, NEW-STRIMZI-VERSION -kafka- CURRENT-KAFKA-VERSION . Your Cluster Operator was downgraded to the previous version. 9.2. Downgrading Kafka Kafka version downgrades are performed by the Cluster Operator. 9.2.1. Kafka version compatibility for downgrades Kafka downgrades are dependent on compatible current and target Kafka versions , and the state at which messages have been logged. You cannot revert to the previous Kafka version if that version does not support any of the inter.broker.protocol.version settings which have ever been used in that cluster, or messages have been added to message logs that use a newer log.message.format.version . The inter.broker.protocol.version determines the schemas used for persistent metadata stored by the broker, such as the schema for messages written to __consumer_offsets . If you downgrade to a version of Kafka that does not understand an inter.broker.protocol.version that has (ever) been previously used in the cluster, the broker will encounter data it cannot understand. If the target downgrade version of Kafka has: The same log.message.format.version as the current version, the Cluster Operator downgrades by performing a single rolling restart of the brokers. A different log.message.format.version , downgrading is only possible if the running cluster has always had log.message.format.version set to the version used by the downgraded version.
This is typically only the case if the upgrade procedure was aborted before the log.message.format.version was changed. In this case, the downgrade requires: Two rolling restarts of the brokers if the interbroker protocol of the two versions is different A single rolling restart if they are the same Downgrading is not possible if the new version has ever used a log.message.format.version that is not supported by the version, including when the default value for log.message.format.version is used. For example, this resource can be downgraded to Kafka version 2.6.0 because the log.message.format.version has not been changed: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.7.0 config: log.message.format.version: "2.6" # ... The downgrade would not be possible if the log.message.format.version was set at "2.7" or a value was absent (so that the parameter took the default value for a 2.7.0 broker of 2.7). 9.2.2. Downgrading Kafka brokers and client applications This procedure describes how you can downgrade a AMQ Streams Kafka cluster to a lower () version of Kafka, such as downgrading from 2.7.0 to 2.6.0. Prerequisites For the Kafka resource to be downgraded, check: IMPORTANT: Compatibility of Kafka versions . The Cluster Operator, which supports both versions of Kafka, is up and running. The Kafka.spec.kafka.config does not contain options that are not supported by the Kafka version being downgraded to. The Kafka.spec.kafka.config has a log.message.format.version and inter.broker.protocol.version that is supported by the Kafka version being downgraded to. Procedure Update the Kafka cluster configuration. oc edit kafka KAFKA-CONFIGURATION-FILE Change the Kafka.spec.kafka.version to specify the version. For example, if downgrading from Kafka 2.7.0 to 2.6.0: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: version: 2.6.0 1 config: log.message.format.version: "2.6" 2 inter.broker.protocol.version: "2.6" 3 # ... 1 Kafka version is changed to the version. 2 Message format version is unchanged. 3 Inter-broker protocol version is unchanged. Note You must format the value of log.message.format.version and inter.broker.protocol.version as a string to prevent it from being interpreted as a floating point number. If the image for the Kafka version is different from the image defined in STRIMZI_KAFKA_IMAGES for the Cluster Operator, update Kafka.spec.kafka.image . See Section 8.1.3.1, "Kafka version and image mappings" Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f CLUSTER-OPERATOR-POD-NAME | grep -E "Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \1 completed" oc get pod -w Check the Cluster Operator logs for an INFO level message: Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed Downgrade all client applications (consumers) to use the version of the client binaries. The Kafka cluster and clients are now using the Kafka version. If you are reverting back to a version of AMQ Streams earlier than 0.22, which uses ZooKeeper for the storage of topic metadata, delete the internal topic store topics from the Kafka cluster. 
oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete Additional resources Topic Operator topic store | [
"sed -i 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: my-cluster-operator-namespace /' install/cluster-operator/*RoleBinding*.yaml",
"replace -f install/cluster-operator",
"get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.7.0 config: log.message.format.version: \"2.6\" #",
"edit kafka KAFKA-CONFIGURATION-FILE",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: version: 2.6.0 1 config: log.message.format.version: \"2.6\" 2 inter.broker.protocol.version: \"2.6\" 3 #",
"logs -f CLUSTER-OPERATOR-POD-NAME | grep -E \"Kafka version downgrade from [0-9.]+ to [0-9.]+, phase ([0-9]+) of \\1 completed\"",
"get pod -w",
"Reconciliation # NUM (watch) Kafka( NAMESPACE / NAME ): Kafka version downgrade from FROM-VERSION to TO-VERSION , phase 1 of 1 completed",
"run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-downgrade-str |
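A hedged way to confirm what the cluster is running after the downgrade completes; the cluster name my-cluster is a placeholder:
oc get kafka my-cluster -o jsonpath='{.spec.kafka.version}'
oc get kafka my-cluster -o jsonpath='{.spec.kafka.config.log\.message\.format\.version}'
oc get pod my-cluster-kafka-0 -o jsonpath='{.spec.containers[0].image}'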
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/integrating_partner_content/proc_providing-feedback-on-red-hat-documentation |
Chapter 7. Uninstalling Directory Server | Chapter 7. Uninstalling Directory Server In certain situations, administrators want to uninstall Directory Server from a host. This chapter describes this procedure. 7.1. Uninstalling Directory Server If you no longer require Directory Server running on a server, uninstall the packages as described in this section. Prerequisites Directory Server installed on the host Procedure Remove all instances from the replication topology. If your instance is not a member of a replication topology, skip this step. For details about removing an instance from the topology, see Removing a Supplier from the Replication Topology in the Red Hat Directory Server Administration Guide. Remove all instances from the server. For details, see Removing a Directory Server Instance in the Red Hat Directory Server Administration Guide. Remove the Directory Server packages: # yum module remove redhat-ds Optionally, disable the dirsrv-11-for-rhel-8-x86_64-rpms repository: # subscription-manager repos --disable=dirsrv-11-for-rhel-8-x86_64-rpms Repository 'dirsrv-11-for-rhel-8-x86_64-rpms' is disabled for this system. Optionally, remove the Red Hat Directory Server subscription from the system: Important If you remove a subscription that provides additional products besides Directory Server, you will not be able to install or update packages for these products. List the subscriptions attached to the host: # subscription-manager list --consumed Subscription Name: Example Subscription ... Pool-ID: 5ab6a8df96b03fd30aba9a9c58da57a1 ... Remove the subscription using the pool ID from the previous step: # subscription-manager remove --pool= 5ab6a8df96b03fd30aba9a9c58da57a1 2 local certificates have been deleted. The entitlement server successfully removed these pools: 5ab6a8df96b03fd30aba9a9c58da57a1 The entitlement server successfully removed these serial numbers: 1658239469356282126 Additional resources For further details about using the subscription-manager utility, see the Using and Configuring Subscription Manager guide. | [
"yum module remove redhat-ds",
"subscription-manager repos --disable=dirsrv-11-for-rhel-8-x86_64-rpms Repository 'dirsrv-11-for-rhel-8-x86_64-rpms' is disabled for this system.",
"subscription-manager list --consumed Subscription Name: Example Subscription Pool-ID: 5ab6a8df96b03fd30aba9a9c58da57a1",
"subscription-manager remove --pool= 5ab6a8df96b03fd30aba9a9c58da57a1 2 local certificates have been deleted. The entitlement server successfully removed these pools: 5ab6a8df96b03fd30aba9a9c58da57a1 The entitlement server successfully removed these serial numbers: 1658239469356282126"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/assembly_uninstalling-directory-server_installation-guide |
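A hedged sketch of removing a local instance with dsctl before removing the packages, complementing the step above; instance_name is a placeholder for your own instance:
# dsctl instance_name status
# dsctl instance_name remove --do-it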
Chapter 5. Exposing the registry | Chapter 5. Exposing the registry By default, the OpenShift image registry is secured during cluster installation so that it serves traffic through TLS. Unlike versions of OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation. 5.1. Exposing a default registry manually Instead of logging in to the default OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This external access enables you to log in to the registry from outside the cluster using the route address and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource. To expose the registry using the defaultRoute : Set defaultRoute to true by running the following command: USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Get the default registry route by running the following command: USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') Get the certificate of the Ingress Operator by running the following command: USD oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // "router-certs-default"' -r) -n openshift-ingress --confirm Move the extracted certificate to the system's trusted CA directory by running the following command: USD sudo mv tls.crt /etc/pki/ca-trust/source/anchors/ Enable the cluster's default certificate to trust the route by running the following command: USD sudo update-ca-trust enable Log in with podman using the default route by running the following command: USD sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST 5.2. Exposing a secure registry manually Instead of logging in to the OpenShift image registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host. Prerequisites The following prerequisites are automatically performed: Deploy the Registry Operator. Deploy the Ingress Operator. You have access to the cluster as a user with the cluster-admin role. Procedure You can expose the route by using DefaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes. To expose the registry using DefaultRoute : Set DefaultRoute to True : USD oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge Log in with podman : USD HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}') USD podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1 1 --tls-verify=false is needed if the cluster's default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator. 
To expose the registry using custom routes: Create a secret with your route's TLS keys: USD oc create secret tls public-route-tls \ -n openshift-image-registry \ --cert=</path/to/tls.crt> \ --key=</path/to/tls.key> This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator. Create a new route on the Registry Operator: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls ... Note Only set secretName if you are providing a custom TLS configuration for the registry's route. Troubleshooting: Error creating TLS secret | [
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm",
"sudo mv tls.crt /etc/pki/ca-trust/source/anchors/",
"sudo update-ca-trust enable",
"sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/registry/securing-exposing-registry |
A.11. numastat | A.11. numastat The numastat tool is provided by the numactl package, and displays memory statistics (such as allocation hits and misses) for processes and the operating system on a per-NUMA-node basis. The default tracking categories for the numastat command are outlined as follows: numa_hit The number of pages that were successfully allocated to this node. numa_miss The number of pages that were allocated on this node because of low memory on the intended node. Each numa_miss event has a corresponding numa_foreign event on another node. numa_foreign The number of pages initially intended for this node that were allocated to another node instead. Each numa_foreign event has a corresponding numa_miss event on another node. interleave_hit The number of interleave policy pages successfully allocated to this node. local_node The number of pages successfully allocated on this node, by a process on this node. other_node The number of pages allocated on this node, by a process on another node. Supplying any of the following options changes the displayed units to megabytes of memory (rounded to two decimal places), and changes other specific numastat behaviors as described below. -c Horizontally condenses the displayed table of information. This is useful on systems with a large number of NUMA nodes, but column width and inter-column spacing are somewhat unpredictable. When this option is used, the amount of memory is rounded to the nearest megabyte. -m Displays system-wide memory usage information on a per-node basis, similar to the information found in /proc/meminfo . -n Displays the same information as the original numastat command ( numa_hit , numa_miss , numa_foreign , interleave_hit , local_node , and other_node ), with an updated format, using megabytes as the unit of measurement. -p pattern Displays per-node memory information for the specified pattern. If the value for pattern is comprised of digits, numastat assumes that it is a numerical process identifier. Otherwise, numastat searches process command lines for the specified pattern. Command line arguments entered after the value of the -p option are assumed to be additional patterns for which to filter. Additional patterns expand, rather than narrow, the filter. -s Sorts the displayed data in descending order so that the biggest memory consumers (according to the total column) are listed first. Optionally, you can specify a node, and the table will be sorted according to the node column. When using this option, the node value must follow the -s option immediately, as shown here: Do not include white space between the option and its value. -v Displays more verbose information. Namely, process information for multiple processes will display detailed information for each process. -V Displays numastat version information. -z Omits table rows and columns with only zero values from the displayed information. Note that some near-zero values that are rounded to zero for display purposes will not be omitted from the displayed output. | [
"numastat -s2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-numastat |
8.4. Migrating NIS Domains to IdM | 8.4. Migrating NIS Domains to IdM If you are managing a Linux environment and want to migrate disparate NIS domains with different UIDs and GIDs into a modern identity management solution, you can use ID views to set host-specific UIDs and GIDs for existing hosts to prevent changing the permissions on existing files and directories. The process for the migration follows these steps: Create the users and groups in the IdM domain. For details, see Adding Stage or Active Users and Adding and Removing User Groups . Use ID views for existing hosts to override the IDs IdM generated during the user creation: Create an individual ID view. Add ID overrides for the users and groups to the ID view. Assign the ID view to the specific hosts. For details, see Defining a Different Attribute Value for a User Account on Different Hosts and Installing and Uninstalling Identity Management Clients in the Linux Domain Identity, Authentication, and Policy Guide . Decommission the NIS domains. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/id-views-nis |
16.4. Setting up Synchronization Between Active Directory and Directory Server | 16.4. Setting up Synchronization Between Active Directory and Directory Server Configuring synchronization is very similar to configuring replication. It requires configuring the database as a supplier with a changelog and creating an agreement to define synchronization. A common user identity, a synchronization user, connects to the Active Directory (AD) domain controller (DC) to send updates from Directory Server to AD and to check AD for updates to synchronize them to Directory Server. Note To enable users to use their accounts on Directory Server and AD, synchronize passwords. Password synchronization requires to use an encrypted connection. Synchronization for user and group entries is passive from the AD side. Directory Server send updates to AD and polls for updates on the AD domain. For passwords, the AD server requires a separate password service. This service actively sends password changes from the AD domain to Directory Server. 16.4.1. Step 1: Enabling TLS on the Directory Server Host The Password Sync service requires to synchronize passwords over an encrypted connection. If TLS is not yet enabled in your Directory Server instance, enable it. For details, see Section 9.4.1, "Enabling TLS in Directory Server" . 16.4.2. Step 2: Enabling Password Complexity in the AD Domain Enable password complexity in the AD domain using a group policy. For example: Open the Group Policy Management console and create a new Group Policy Object (GPO) in the domain. For details about using the Group Policy Management console, see the Windows documentation. Right-click the GPO, and select Edit to open the Group Policy Management Editor . Navigate to Computer Configuration Windows Settings Security Settings Account Policies Password Policy , and double-click the policy named Password must meet complexity requirements . Enable the policy and click OK . Close the Group Policy Management Editor and the Group Policy Management console. 16.4.3. Step 3: Extracting the CA Certificate from AD Extract the root certificate authority (CA) certificate and copy it to the Directory Server host: If your AD CA certificate is self-signed: On an AD DC with the Certification Authority application installed, press the Super key + R combination to open the Run dialog. Enter the certsrv.msc command and click OK to open the Certification Authority application. Right-click on the name of the local Certificate Authority and choose Properties . On the General tab, select the certificate to export in the CA certificates field and click View Certificate . On the Details tab, click Copy to File to start the Certificate Export Wizard . Click , and then select Base-64 encoded X.509 (.CER) . Specify a suitable directory and file name for the exported file. Click to export the certificate, and then click Finish . Copy the root CA certificate to the Directory Server host. If your AD CA certificate is signed by an external CA: Determine the root CA. For example: The example shows that the AD server's CA certificate is signed by CN=Demo CA-1 , which is signed by CN=Demo Root CA 2 . This means that CN=Demo Root CA 2 is the root CA. Contact the operator of the root CA about how to retrieve the CA certificate. Copy the root CA certificate to the Directory Server host. 16.4.4. 
Step 4: Extracting the CA Certificate from the Directory Server's NSS Database To extract the CA certificate from the Directory Server's NSS database: List the certificates in the database: Extract the CA certificate from the database. For example, to extract the CA certificate with the Example CA nickname and store it in the /root/ds-ca.crt file: Copy the CA certificate to the AD DC. 16.4.5. Step 5: Creating the Synchronization Accounts For synchronization between AD and Directory Server, you require one account in AD and one in Directory Server. This section explains further details about creating these accounts. Creating an Account in Directory Server The AD DCs use a Directory Server account in the Password Sync service to synchronize passwords to Directory Server. For example, to create the cn=pw_sync_user,dc=config user in Directory Server: Create the user account: This creates the cn=pw_sync_user,dc=config account and sets its expiration time to January 01 2038. Important For security reasons, do not create the account in the synchronized subtree. Set an ACI at the top of the subtree that will be synchronized and grants write and compare permissions to the cn=pw_sync_user,dc=config user. For example, to add such an ACI to the ou=People,dc=example,dc=com entry: Configure that Directory Server can store passwords in clear text in the changelog: Because Directory Server uses a different password encryption than Active Directory, Directory Server must send the password in clear text to the Windows server. However, the clear text password is sent over a TLS encrypted connection that is required for password synchronization and is, therefore, not exposed to the network. Creating an Account in AD To send and receive updates, Directory Server uses an AD account when connecting to AD. This account must be a member of the Domain Admins group or have equivalent permissions in AD. For details about creating AD accounts, see your AD documentation. 16.4.6. Step 6: Installing the Password Sync Service Install the Password Sync on every writable DC in your AD. For details about installing the Password Sync service, see the Installing the password synchronization service section in the Red Hat Directory Server Installation Guide . For a list of operating systems running the Password Sync service that Red Hat supports, see the Red Hat Directory Server Release Notes . 16.4.7. Step 7: Adding the CA Certificate Directory Server uses to the Password Sync Service's Certificate Database On every DC that has the Password Sync service installed, add the CA certificate Directory Server uses to the Password Sync service's certificate database: Change into the C:\Program Files\Red Hat Directory Password Synchronization\ directory: Create the certificate databases in the current directory: The certutil.exe utility prompts to set a password to the new database it creates. Import the CA certificate used by the Directory Server instance. You copied this certificate in Section 16.4.4, "Step 4: Extracting the CA Certificate from the Directory Server's NSS Database" to the Windows DC. For example, to import the certificate from the C:\ds-ca.crt file and store it in the database with the Example CA nickname: Optionally, verify that the CA certificate was stored correctly in the database: Reboot the Windows DC. The Password Sync service is not available until you reboot the system. 
Note If any AD user accounts exist when you install Password Sync , the service cannot synchronize the passwords for those accounts until the passwords are changed. This happens because Password Sync cannot decrypt a password once it has been stored in Active Directory. For details about enforcing a password reset for AD users, see the Active Directory documentation. 16.4.8. Step 8: Adding the CA Certificate AD uses to Directory Server's Certificate Database On the Directory Server host, add the CA certificate AD uses to the certificate database: Import the CA certificate AD uses. You copied this certificate in Section 16.4.3, "Step 3: Extracting the CA Certificate from AD" to the Directory Server host. For example, to import the certificate from the /root/ad-ca.crt file and store it in the database with the Example CA nickname: Optionally, verify that the CA certificate was stored correctly in the database: 16.4.9. Step 9: Configuring the Database for Synchronization and Creating the Synchronization Agreement This section describes how to configure the database for synchronization and create the synchronization agreement. 16.4.9.1. Configuring the Database for Synchronization and Creating the Synchronization Agreement Using the Command Line The following example assumes that you have Directory Server running on a host named ds.example.com and an AD DC running on a host named win-server.ad.example.com . The following procedure describes how to configure synchronization between these hosts: Enable replication for the suffix: This command configures the ds.example.com host as a supplier for the dc=example,dc=com suffix and sets the replica ID for this entry to 1 . Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Add the synchronization agreement and initialize the agreement. For example: This command creates a replication agreement named example-agreement . The replication agreement defines settings, such as AD DC's host name, protocol, and authentication information, Directory Server uses when connecting and synchronizing data to the DC. After the agreement is created, Directory Server initializes the agreement. To initialize the agreement later, omit the --init option. Note that synchronization does not start before you initialized the agreement. For details about initializing a synchronization agreement, see Section 16.11.2.1, "Performing a Full Synchronization Using the Command Line" . Optionally, pass the --sync-users="on" and --sync-groups="on" option to the command to automatically synchronize new Windows users and groups to Directory Server. For further details about the options used in the command, enter: Verify that the initialization was successful: 16.4.9.2. Configuring the Database for Synchronization and Creating the Synchronization Agreement Using the Web Console The following example assumes that you have Directory Server running on a host named ds.example.com and an AD DC running on a host named win-server.ad.example.com . The following procedure describes how to configure synchronization between these hosts: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Enable replication for the suffix: Open the Replication menu. Select the dc=example,dc=com suffix, and click Enable Replication . Select Supplier in the Replication Role field and enter a replica ID. 
For example: These settings configure the ds.example.com host as a supplier for the dc=example,dc=com suffix and sets the replica ID for this entry to 1 . Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Click Enable Replication . Add the synchronization agreement and initialize agreement: Open the Replication menu and select the Winsync Agreements entry. Click Create Agreement and fill the fields. For example: These settings will create a synchronization agreement named example-agreement . The synchronization agreement defines settings, such as the DC's host name, protocol, and authentication information, Directory Server uses when connecting and synchronizing data. Optionally, select Sync New Windows Users and Sync New Windows Groups to automatically synchronize new Windows users and groups to Directory Server. After the agreement is created, Directory Server initializes the agreement. To initialize the agreement later, do not select Do Online Initialization . Note that synchronization does not start before you initialized the agreement. For details about initializing a synchronization agreement, see Section 16.11.2.2, "Performing a Full Synchronization Using the Web Console" . Click Save Agreement . Verify that the initialization was successful: Open the Replication menu. Select the Agreements entry. If the initialization completed successfully, the web console displays the Error (0) Replica acquired successfully: Incremental update succeeded message in the Last Update Status column. Depending of the amount of data to synchronize, the initialization can take up to several hours. | [
"openssl s_client -connect adserver.example.com:636 CONNECTED(00000003) depth=1 C = US, O = Demo Company, OU = IT, CN = Demo CA-28 verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/O=Demo Company/OU=IT/CN=adserver.example.com i:/C=US/O=Demo Company/OU=IT/CN=Demo CA-1 1 s:/C=US/O=Demo Company/OU=IT/CN=Demo CA-1 i:/C=US/O=Demo Company/OU=IT/CN=Demo Root CA 2",
"certutil -d /etc/dirsrv/slapd- instance_name / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Server-Cert u,u,u Example CA C,,",
"certutil -d /etc/dirsrv/slapd- instance_name / -L -n \"Example CA\" -a > /root/ds-ca.crt",
"ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=pw_sync_user ,cn=config objectClass: inetorgperson objectClass: person objectClass: top cn: pw_sync_user sn: pw_sync_user userPassword: password passwordExpirationTime: 20380101000000Z",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr=\"userPassword\")(version 3.0;acl \" Password synchronization \"; allow (write,compare) userdn=\"ldap:/// cn=pw_sync_user,dc=config \";)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-unhashed-pw-switch=on",
"> cd \"C:\\Program Files\\Red Hat Directory Password Synchronization\\\"",
"> certutil.exe -d . -N",
"> certutil.exe -d . -A -n \" Example CA \" -t CT,, -a -i \"C:\\ds-ca.crt\"",
"> certutil.exe -d . -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Example CA CT,,",
"> certutil -d /etc/dirsrv/slapd- instance_name / -A -n \" Example CA \" -t CT,, -a -i /root/ad-ca.crt",
"> certutil -d /etc/dirsrv/slapd- instance_name / -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Example CA CT,,",
"dsconf -D \"cn=Directory Manager\" ldap://ds.example.com replication enable --suffix=\"dc=example,dc=com\" --role=\"supplier\" --replica-id=1",
"dsconf -D \"cn=Directory Manager\" ldap://ds.example.com repl-winsync-agmt create --suffix=\"dc=example,dc=com\" --host=\"win-server.ad.example.com\" --port=636 --conn-protocol=\"LDAPS\" --bind-dn=\"cn= user_name,cn=Users,dc=ad,dc=example,dc=com \" --bind-passwd=\" password \" --win-subtree=\" cn=Users,dc=example,dc=com \" --ds-subtree=\" ou=People,dc=example,dc=com \" --win-domain=\" AD \" --init example-agreement",
"dsconf -D \"cn=Directory Manager\" ldap://ds.example.com repl-agmt --help",
"dsconf -D \"cn=Directory Manager\" ldap://ds.example.com repl-winsync-agmt init-status --suffix=\" dc=example,dc=com \" example-agreement Agreement successfully initialized."
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/setting_up_windows_synchronization_between_active_directory_and_directory_server |
8.4.3. KSM Variables and Monitoring | 8.4.3. KSM Variables and Monitoring Kernel same-page merging (KSM) stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics. The variables in the list below are also configurable variables in the /etc/ksmtuned.conf file, as noted above. Files in /sys/kernel/mm/ksm/ : full_scans Full scans run. merge_across_nodes Whether pages from different NUMA nodes can be merged. pages_shared Total pages shared. pages_sharing Pages currently shared. pages_to_scan Pages not scanned. pages_unshared Pages no longer shared. pages_volatile Number of volatile pages. run Whether the KSM process is running. sleep_millisecs Sleep milliseconds. These variables can be manually tuned using the virsh node-memory-tune command. For example, the following specifies the number of pages to scan before the shared memory service goes to sleep: KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings. | [
"virsh node-memory-tune --shm-pages-to-scan number"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-ksm-ksm_variables_and_monitoring |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/federate_with_identity_service/making-open-source-more-inclusive |
Chapter 2. Working with pods | Chapter 2. Working with pods 2.1. Using pods A pod is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. 2.1.1. Understanding pods Pods are the rough equivalent of a machine instance (physical or virtual) to a Container. Each pod is allocated its own internal IP address, therefore owning its entire port space, and containers within pods can share their local storage and networking. Pods have a lifecycle; they are defined, then they are assigned to run on a node, then they run until their container(s) exit or they are removed for some other reason. Pods, depending on policy and exit code, might be removed after exiting, or can be retained to enable access to the logs of their containers. OpenShift Dedicated treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift Dedicated implements changes by terminating an existing pod and recreating it with modified configuration, base image(s), or both. Pods are also treated as expendable, and do not maintain state when recreated. Therefore pods should usually be managed by higher-level controllers, rather than directly by users. Warning Bare pods that are not managed by a replication controller will be not rescheduled upon node disruption. 2.1.2. Example pod configurations OpenShift Dedicated leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. The following is an example definition of a pod. It demonstrates many features of pods, most of which are discussed in other topics and thus only briefly mentioned here: Pod object definition (YAML) kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - "1000000" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: ["ALL"] resources: limits: memory: "100Mi" cpu: "1" requests: memory: "100Mi" cpu: "1" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi 1 Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. 2 The pod restart policy with possible values Always , OnFailure , and Never . The default value is Always . 3 OpenShift Dedicated defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed. 4 containers specifies an array of one or more container definitions. 5 The container specifies where external storage volumes are mounted within the container. 6 Specify the volumes to provide for the pod. Volumes mount at the specified path. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 7 Each container in the pod is instantiated from its own container image. 
8 The pod defines storage volumes that are available to its container(s) to use. If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . Note This pod definition does not include attributes that are filled by OpenShift Dedicated automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods. 2.1.3. Additional resources For more information on pods and storage see Understanding persistent storage and Understanding ephemeral storage . 2.2. Viewing pods As an administrator, you can view the pods in your cluster and to determine the health of those pods and the cluster as a whole. 2.2.1. About pods OpenShift Dedicated leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods. 2.2.2. Viewing pods in a project You can view a list of pods associated with the current project, including the number of replica, the current status, number or restarts and the age of the pod. Procedure To view the pods in a project: Change to the project: USD oc project <project-name> Run the following command: USD oc get pods For example: USD oc get pods Example output NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m Add the -o wide flags to view the pod IP address and the node where the pod is located. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none> 2.2.3. Viewing pod usage statistics You can display usage statistics about pods, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: Run the following command: USD oc adm top pods For example: USD oc adm top pods -n openshift-console Example output NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi Run the following command to view the usage statistics for pods with labels: USD oc adm top pod --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . For example: USD oc adm top pod --selector='name=my-pod' 2.2.4. Viewing resource logs You can view the log for various resources in the OpenShift CLI ( oc ) and web console. Logs read from the tail, or end, of the log. Prerequisites Access to the OpenShift CLI ( oc ). Procedure (UI) In the OpenShift Dedicated console, navigate to Workloads Pods or navigate to the pod through the resource you want to investigate. Note Some resources, such as builds, do not have pods to query directly. 
In such instances, you can locate the Logs link on the Details page for the resource. Select a project from the drop-down menu. Click the name of the pod you want to investigate. Click Logs . Procedure (CLI) View the log for a specific pod: USD oc logs -f <pod_name> -c <container_name> where: -f Optional: Specifies that the output follows what is being written into the logs. <pod_name> Specifies the name of the pod. <container_name> Optional: Specifies the name of a container. When a pod has more than one container, you must specify the container name. For example: USD oc logs ruby-58cd97df55-mww7r USD oc logs -f ruby-57f7f4855b-znl92 -c ruby The contents of log files are printed out. View the log for a specific resource: USD oc logs <object_type>/<resource_name> 1 1 Specifies the resource type and name. For example: USD oc logs deployment/ruby The contents of log files are printed out. 2.3. Configuring an OpenShift Dedicated cluster for pods As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers using such tools as what a pod does when it exits, ensuring that the required number of pods is always running, when to restart pods designed to run only once, limit the bandwidth available to pods, and how to keep pods running during disruptions. 2.3.1. Configuring how pods behave after restart A pod restart policy determines how OpenShift Dedicated responds when Containers in that pod exit. The policy applies to all Containers in that pod. The possible values are: Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. The default is Always . OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes. Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit. After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure: Condition Controller Type Restart Policy Pods that are expected to terminate (such as batch computations) Job OnFailure or Never Pods that are expected to not terminate (such as web servers) Replication controller Always . Pods that must run one-per-machine Daemon set Any If a Container on a pod fails and the restart policy is set to OnFailure , the pod stays on the node and the Container is restarted. If you do not want the Container to restart, use a restart policy of Never . If an entire pod fails, OpenShift Dedicated starts a new pod. Developers must address the possibility that applications might be restarted in a new pod. In particular, applications must handle temporary files, locks, incomplete output, and so forth caused by runs. Note Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OpenShift Dedicated from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster using cloud provider integration. Install the cluster as if it was in a no-cloud environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. For details on how OpenShift Dedicated uses restart policy with failed Containers, see the Example States in the Kubernetes documentation. 2.3.2. 
Limiting the bandwidth available to pods You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods. Procedure To limit the bandwidth on a pod: Write an object definition JSON file, and specify the data traffic speed using kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s: Limited Pod object definition { "kind": "Pod", "spec": { "containers": [ { "image": "openshift/hello-openshift", "name": "hello-openshift" } ] }, "apiVersion": "v1", "metadata": { "name": "iperf-slow", "annotations": { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" } } } Create the pod using the object definition: USD oc create -f <file_or_dir_path> 2.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up A pod disruption budget allows the specification of safety constraints on pods during operations, such as draining a node for maintenance. PodDisruptionBudget is an API object that specifies the minimum number or percentage of replicas that must be up at a time. Setting these in projects can be helpful during node maintenance (such as scaling a cluster down or a cluster upgrade) and is only honored on voluntary evictions (not on node failures). A PodDisruptionBudget object's configuration consists of the following key parts: A label selector, which is a label query over a set of pods. An availability level, which specifies the minimum number of pods that must be available simultaneously, either: minAvailable is the number of pods must always be available, even during a disruption. maxUnavailable is the number of pods can be unavailable during a disruption. Note Available refers to the number of pods that has condition Ready=True . Ready=True refers to the pod that is able to serve requests and should be added to the load balancing pools of all matching services. A maxUnavailable of 0% or 0 or a minAvailable of 100% or equal to the number of replicas is permitted but can block nodes from being drained. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Dedicated. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. You can check for pod disruption budgets across all projects with the following: USD oc get poddisruptionbudget --all-namespaces Note The following example contains some values that are specific to OpenShift Dedicated on AWS. Example output NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #... 
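The status stanza of an individual budget shows how much headroom is left before evictions are blocked. A minimal sketch, assuming a budget named my-pdb exists in the my-project project (both names are placeholders):

oc get poddisruptionbudget my-pdb -n my-project -o jsonpath='{.status.currentHealthy} {.status.desiredHealthy} {.status.disruptionsAllowed}{"\n"}'

When disruptionsAllowed reports 0, a voluntary eviction such as a node drain is currently blocked by this budget.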
The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system. Every pod above that limit can be evicted. Note Depending on your pod priority and preemption settings, lower-priority pods might be removed despite their pod disruption budget requirements. 2.3.3.1. Specifying the number of pods that must be up with pod disruption budgets You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time. Procedure To configure a pod disruption budget: Create a YAML file with the an object definition similar to the following: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The minimum number of pods that must be available simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Or: apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod 1 PodDisruptionBudget is part of the policy/v1 API group. 2 The maximum number of pods that can be unavailable simultaneously. This can be either an integer or a string specifying a percentage, for example, 20% . 3 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. Leave this parameter blank, for example selector {} , to select all pods in the project. Run the following command to add the object to project: USD oc create -f </path/to/file> -n <project_name> 2.4. Providing sensitive data to pods by using secrets Additional resources Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text. 2.4.1. Understanding secrets The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Dedicated client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. YAML Secret object definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary . 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. 
5 The value associated with keys in the stringData map is made up of plain text strings. You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod's service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume). 2.4.1.1. Types of secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/basic-auth : Use with Basic authentication kubernetes.io/dockercfg : Use as an image pull secret kubernetes.io/dockerconfigjson : Use as an image pull secret kubernetes.io/service-account-token : Use to obtain a legacy service account API token kubernetes.io/ssh-auth : Use with SSH key authentication kubernetes.io/tls : Use with TLS certificate authorities Specify type: Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. For examples of creating different types of secrets, see Understanding how to create secrets . 2.4.1.2. Secret data keys Secret keys must be in a DNS subdomain. 2.4.1.3. Automatically generated image pull secrets By default, OpenShift Dedicated creates an image pull secret for each service account. Note Prior to OpenShift Dedicated 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Dedicated 4.16, this service account API token secret is no longer created. After upgrading to 4.16, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 2.4.2. Understanding how to create secrets As an administrator, you must create a secret before developers can create the pods that depend on that secret. When creating secrets: Create a secret object that contains the data you want to keep secret. The specific data required for each secret type is described in the following sections.
Example YAML object that creates an opaque secret apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB 1 Specifies the type of secret. 2 Specifies encoded string and data. 3 Specifies decoded string and data. Use either the data or stringdata fields, not both. Update the pod's service account to reference the secret: YAML of a service account that uses a secret apiVersion: v1 kind: ServiceAccount ... secrets: - name: test-secret Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume): YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never 1 Add a volumeMounts field to each container that needs the secret. 2 Specifies an unused directory name where you would like the secret to appear. Each key in the secret data map becomes the filename under mountPath . 3 Set to true . If true, this instructs the driver to provide a read-only volume. 4 Specifies the name of the secret. YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Specifies the environment variable that consumes the secret key. YAML of a build config populating environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest' 1 Specifies the environment variable that consumes the secret key. 2.4.2.1. Secret creation restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to a Secret object. Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. 
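When a secret only has to exist before the pods that consume it, it does not need to be written as a manifest at all; the CLI can generate an opaque secret directly. A brief sketch, in which the secret name test-secret and both key names are placeholders:

oc create secret generic test-secret --from-literal=username=<username> --from-literal=password=<password>

The --from-file option works the same way for loading a key's value from a file, and in either case the values are base64 encoded and stored under the data field automatically.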
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that could exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. 2.4.2.2. Creating an opaque secret As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values. Procedure Create a Secret object in a YAML file. For example: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Use the following command to create a Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.4.2.3. Creating a legacy service account token secret As an administrator, you can create a legacy service account token secret, which allows you to distribute a service account token to applications that must authenticate to the API. Warning It is recommended to obtain bound service account tokens using the TokenRequest API instead of using legacy service account token secrets. You should create a service account token secret only if you cannot use the TokenRequest API and if the security exposure of a nonexpiring token in a readable API object is acceptable to you. Bound service account tokens are more secure than service account token secrets for the following reasons: Bound service account tokens have a bounded lifetime. Bound service account tokens contain audiences. Bound service account tokens can be bound to pods or secrets and the bound tokens are invalidated when the bound object is removed. Workloads are automatically injected with a projected volume to obtain a bound service account token. If your workload needs an additional service account token, add an additional projected volume in your workload manifest. For more information, see "Configuring bound service account tokens using volume projection". Procedure Create a Secret object in a YAML file: Example Secret object apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: "sa-name" 1 type: kubernetes.io/service-account-token 2 1 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 2 Specifies a service account token secret. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.4.2.4. Creating a basic authentication secret As an administrator, you can create a basic authentication secret, which allows you to store the credentials needed for basic authentication. 
When using this secret type, the data parameter of the Secret object must contain the following keys encoded in the base64 format: username : the user name for authentication password : the password or token for authentication Note You can use the stringData parameter to use clear text content. Procedure Create a Secret object in a YAML file: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password> 1 Specifies a basic authentication secret. 2 Specifies the basic authentication values to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.4.2.5. Creating an SSH authentication secret As an administrator, you can create an SSH authentication secret, which allows you to store data used for SSH authentication. When using this secret type, the data parameter of the Secret object must contain the SSH credential to use. Procedure Create a Secret object in a YAML file on a control plane node: Example secret object apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y ... 1 Specifies an SSH authentication secret. 2 Specifies the SSH key/value pair as the SSH credentials to use. Use the following command to create the Secret object: USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.4.2.6. Creating a Docker configuration secret As an administrator, you can create a Docker configuration secret, which allows you to store the credentials for accessing a container image registry. kubernetes.io/dockercfg . Use this secret type to store your local Docker configuration file. The data parameter of the secret object must contain the contents of a .dockercfg file encoded in the base64 format. kubernetes.io/dockerconfigjson . Use this secret type to store your local Docker configuration JSON file. The data parameter of the secret object must contain the contents of a .docker/config.json file encoded in the base64 format. Procedure Create a Secret object in a YAML file. Example Docker configuration secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration file. 
2 The output of a base64-encoded Docker configuration file Example Docker configuration JSON secret object apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a Docker configuration JSONfile. 2 The output of a base64-encoded Docker configuration JSON file Use the following command to create the Secret object USD oc create -f <filename>.yaml To use the secret in a pod: Update the pod's service account to reference the secret, as shown in the "Understanding how to create secrets" section. Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume), as shown in the "Understanding how to create secrets" section. Additional resources Understanding how to create secrets 2.4.2.7. Creating a secret using the web console You can create secrets using the web console. Procedure Navigate to Workloads Secrets . Click Create From YAML . Edit the YAML manually to your specifications, or drag and drop a file into the YAML editor. For example: apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com 1 This example specifies an opaque secret; however, you may see other secret types such as service account token secret, basic authentication secret, SSH authentication secret, or a secret that uses Docker configuration. 2 Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field. Click Create . Click Add Secret to workload . From the drop-down menu, select the workload to add. Click Save . 2.4.3. Understanding how to update secrets When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec). Updating a secret follows the same workflow as deploying a new Container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods will report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 2.4.4. Creating and using secrets As an administrator, you can create a service account token secret. This allows you to distribute a service account token to applications that must authenticate to the API. Procedure Create a service account in your namespace by running the following command: USD oc create sa <service_account_name> -n <your_namespace> Save the following YAML example to a file named service-account-token-secret.yaml . 
The example includes a Secret object configuration that you can use to generate a service account token: apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: "sa-name" 2 type: kubernetes.io/service-account-token 3 1 Replace <secret_name> with the name of your service token secret. 2 Specifies an existing service account name. If you are creating both the ServiceAccount and the Secret objects, create the ServiceAccount object first. 3 Specifies a service account token secret type. Generate the service account token by applying the file: USD oc apply -f service-account-token-secret.yaml Get the service account token from the secret by running the following command: USD oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1 Example output ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA 1 Replace <sa_token_secret> with the name of your service token secret. Use your service account token to authenticate with the API of your cluster: USD curl -X GET <openshift_cluster_api> --header "Authorization: Bearer <token>" 1 2 1 Replace <openshift_cluster_api> with the OpenShift cluster API. 2 Replace <token> with the service account token that is output in the preceding command. 2.4.5. About using signed certificates with secrets To secure communication to your service, you can configure OpenShift Dedicated to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Service Pod spec configured for a service serving certificates secret. apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1 # ... 1 Specify the name for the certificate Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 2.4.5.1. Generating signed certificates for use with secrets To use a signed serving certificate/key pair with a pod, create or edit the service to add the service.beta.openshift.io/serving-cert-secret-name annotation, then add the secret to the pod. 
Procedure To create a service serving certificate secret : Edit the Pod spec for your service. Add the service.beta.openshift.io/serving-cert-secret-name annotation with the name you want to use for your secret. kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376 The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. Create the service: USD oc create -f <file-name>.yaml View the secret to make sure it was created: View a list of all secrets: USD oc get secrets Example output NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m View details on your secret: USD oc describe secret my-cert Example output Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes Edit your Pod spec with that secret. apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: "/etc/my-path" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511 When it is available, your pod will run. The certificate will be good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. 2.4.6. Troubleshooting secrets If a service certificate generation fails with (service's service.beta.openshift.io/serving-cert-generation-error annotation contains): secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service service.beta.openshift.io/serving-cert-generation-error , service.beta.openshift.io/serving-cert-generation-error-num : Delete the secret: USD oc delete secret <secret_name> Clear the annotations: USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command removing annotation has a - after the annotation name to be removed. 2.5. Creating and using config maps The following sections define config maps and how to create and use them. 2.5.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. 
In OpenShift Dedicated, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Dedicated. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Dedicated node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. 2.5.2. Creating a config map in the OpenShift Dedicated web console You can create a config map in the OpenShift Dedicated web console. Procedure To create a config map as a cluster administrator: In the Administrator perspective, select Workloads Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . To create a config map as a developer: In the Developer perspective, select Config Maps . At the top right side of the page, select Create Config Map . Enter the contents of your config map. Select Create . 2.5.3. Creating a config map by using the CLI You can use the following command to create a config map from directories, specific files, or literal values. Procedure Create a config map: USD oc create configmap <configmap_name> [options] 2.5.3.1. Creating a config map from a directory You can create a config map from a directory by using the --from-file flag. This method allows you to use multiple files within a directory to create a config map. 
Each file in the directory is used to populate a key in the config map, where the name of the key is the file name, and the value of the key is the content of the file. For example, the following command creates a config map with the contents of the example-files directory: USD oc create configmap game-config --from-file=example-files/ View the keys in the config map: USD oc describe configmaps game-config Example output Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes You can see that the two keys in the map are created from the file names in the directory specified in the command. The content of those keys might be large, so the output of oc describe only shows the names of the keys and their sizes. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map holding the content of each file in this directory by entering the following command: USD oc create configmap game-config \ --from-file=example-files/ Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps game-config -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: "407" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985 2.5.3.2. Creating a config map from a file You can create a config map from a file by using the --from-file flag. You can pass the --from-file option multiple times to the CLI. You can also specify the key to set in a config map for content imported from a file by passing a key=value expression to the --from-file option. For example: USD oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties Note If you create a config map from a file, you can include files containing non-UTF8 data that are placed in this field without corrupting the non-UTF8 data. OpenShift Dedicated detects binary files and transparently encodes the file as MIME . On the server, the MIME payload is decoded and stored without corrupting the data. Prerequisite You must have a directory with files that contain the data you want to populate a config map with. 
The following procedure uses these example files: game.properties and ui.properties : USD cat example-files/game.properties Example output enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 USD cat example-files/ui.properties Example output color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice Procedure Create a config map by specifying a specific file: USD oc create configmap game-config-2 \ --from-file=example-files/game.properties \ --from-file=example-files/ui.properties Create a config map by specifying a key-value pair: USD oc create configmap game-config-3 \ --from-file=game-special-key=example-files/game.properties Verification Enter the oc get command for the object with the -o option to see the values of the keys from the file: USD oc get configmaps game-config-2 -o yaml Example output apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: "516" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985 Enter the oc get command for the object with the -o option to see the values of the keys from the key-value pair: USD oc get configmaps game-config-3 -o yaml Example output apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: "530" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985 1 This is the key that you set in the preceding step. 2.5.3.3. Creating a config map from literal values You can supply literal values for a config map. The --from-literal option takes a key=value syntax, which allows literal values to be supplied directly on the command line. Procedure Create a config map by specifying a literal value: USD oc create configmap special-config \ --from-literal=special.how=very \ --from-literal=special.type=charm Verification Enter the oc get command for the object with the -o option to see the values of the keys: USD oc get configmaps special-config -o yaml Example output apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: "651" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985 2.5.4. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 2.5.4.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 2.5.4.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. 
When this pod is run, the output from the echo command run in the test-container container is as follows: 2.5.4.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: 2.6. Including pod priority in pod scheduling decisions You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node. To use priority and preemption, reference a priority class in the pod specification to apply that weight for scheduling. 2.6.1. Understanding pod priority When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, scheduler continues to schedule other lower priority pods. 2.6.1.1. Pod priority classes You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority. A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than or equal to one billion for critical pods that must not be preempted or evicted. 
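For illustration, a custom priority class object can be defined as in the following minimal sketch. The name high-priority, the value 1000000, and the description are hypothetical examples and are not reserved system values:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: Hypothetical priority class for important application pods.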
By default, OpenShift Dedicated has two reserved priority classes for critical system pods to have guaranteed scheduling. USD oc get priorityclasses Example output NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s system-node-critical - This priority class has a value of 2000001000 and is used for all pods that should never be evicted from a node. Examples of pods that have this priority class are ovnkube-node , and so forth. A number of critical components include the system-node-critical priority class by default, for example: master-api master-controller master-etcd ovn-kubernetes sync system-cluster-critical - This priority class has a value of 2000000000 (two billion) and is used with pods that are important for the cluster. Pods with this priority class can be evicted from a node in certain circumstances. For example, pods configured with the system-node-critical priority class can take priority. However, this priority class does ensure guaranteed scheduling. Examples of pods that can have this priority class are fluentd, add-on components like descheduler, and so forth. A number of critical components include the system-cluster-critical priority class by default, for example: fluentd metrics-server descheduler openshift-user-critical - You can use the priorityClassName field with important pods that cannot bind their resource consumption and do not have predictable resource consumption behavior. Prometheus pods under the openshift-monitoring and openshift-user-workload-monitoring namespaces use the openshift-user-critical priorityClassName . Monitoring workloads use system-critical as their first priorityClass , but this causes problems when monitoring uses excessive memory and the nodes cannot evict them. As a result, monitoring drops priority to give the scheduler flexibility, moving heavy workloads around to keep critical nodes operating. cluster-logging - This priority is used by Fluentd to make sure Fluentd pods are scheduled to nodes over other apps. 2.6.1.2. Pod priority names After you have one or more priority classes, you can create pods that specify a priority class name in a Pod spec. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected. 2.6.2. Understanding pod preemption When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod. When the scheduler preempts one or more pods on a node, the nominatedNodeName field of higher-priority Pod spec is set to the name of the node, along with the nodename field. The scheduler uses the nominatedNodeName field to keep track of the resources reserved for pods and also provides information to the user about preemptions in the clusters. After the scheduler preempts a lower-priority pod, the scheduler honors the graceful termination period of the pod. If another node becomes available while scheduler is waiting for the lower-priority pod to terminate, the scheduler can schedule the higher-priority pod on that node. 
As a result, the nominatedNodeName field and nodeName field of the Pod spec might be different. Also, if the scheduler preempts pods on a node and is waiting for termination, and a pod with a higher-priority pod than the pending pod needs to be scheduled, the scheduler can schedule the higher-priority pod instead. In such a case, the scheduler clears the nominatedNodeName of the pending pod, making the pod eligible for another node. Preemption does not necessarily remove all lower-priority pods from a node. The scheduler can schedule a pending pod by removing a portion of the lower-priority pods. The scheduler considers a node for pod preemption only if the pending pod can be scheduled on the node. 2.6.2.1. Non-preempting priority classes Pods with the preemption policy set to Never are placed in the scheduling queue ahead of lower-priority pods, but they cannot preempt other pods. A non-preempting pod waiting to be scheduled stays in the scheduling queue until sufficient resources are free and it can be scheduled. Non-preempting pods, like other pods, are subject to scheduler back-off. This means that if the scheduler tries unsuccessfully to schedule these pods, they are retried with lower frequency, allowing other pods with lower priority to be scheduled before them. Non-preempting pods can still be preempted by other, high-priority pods. 2.6.2.2. Pod preemption and other scheduler settings If you enable pod priority and preemption, consider your other scheduler settings: Pod priority and pod disruption budget A pod disruption budget specifies the minimum number or percentage of replicas that must be up at a time. If you specify pod disruption budgets, OpenShift Dedicated respects them when preempting pods at a best effort level. The scheduler attempts to preempt pods without violating the pod disruption budget. If no such pods are found, lower-priority pods might be preempted despite their pod disruption budget requirements. Pod priority and pod affinity Pod affinity requires a new pod to be scheduled on the same node as other pods with the same label. If a pending pod has inter-pod affinity with one or more of the lower-priority pods on a node, the scheduler cannot preempt the lower-priority pods without violating the affinity requirements. In this case, the scheduler looks for another node to schedule the pending pod. However, there is no guarantee that the scheduler can find an appropriate node and pending pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. 2.6.2.3. Graceful termination of preempted pods When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. To minimize this gap, configure a small graceful termination period for lower-priority pods. 2.6.3. Configuring priority and preemption You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specs. Note You cannot add a priority class directly to an existing scheduled pod. 
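As a sketch of the non-preempting behavior described earlier, a priority class can set preemptionPolicy: Never so that its pods are queued ahead of lower-priority pods but never evict running pods. The name and value in this example are hypothetical:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 100000
preemptionPolicy: Never
globalDefault: false
description: Hypothetical priority class that does not preempt running pods.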
Procedure To configure your cluster to use priority and preemption: Define a pod spec to include the name of a priority class by creating a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent priorityClassName: system-cluster-critical 1 1 Specify the priority class to use with this pod. Create the pod: USD oc create -f <file-name>.yaml You can add the priority name directly to the pod configuration or to a pod template. 2.7. Placing pods on specific nodes using node selectors A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below. 2.7.1. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Dedicated schedules the pods on nodes that contain matching labels. You add labels to a node, a compute machine set, or a machine config. Adding the label to the compute machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: # ... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 # ... 1 Add the node selector. 
To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # ... spec: nodeSelector: region: east type: user-node # ... Note You cannot add a node selector directly to an existing scheduled pod. | [
"kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi",
"oc project <project-name>",
"oc get pods",
"oc get pods",
"NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>",
"oc adm top pods",
"oc adm top pods -n openshift-console",
"NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi",
"oc adm top pod --selector=''",
"oc adm top pod --selector='name=my-pod'",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }",
"oc create -f <file_or_dir_path>",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod",
"apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod",
"oc create -f </path/to/file> -n <project_name>",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB",
"apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com",
"oc create sa <service_account_name> -n <your_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3",
"oc apply -f service-account-token-secret.yaml",
"oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1",
"ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA",
"curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2",
"apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1",
"kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f <file-name>.yaml",
"oc get secrets",
"NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m",
"oc describe secret my-cert",
"Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes",
"apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"oc create configmap <configmap_name> [options]",
"oc create configmap game-config --from-file=example-files/",
"oc describe configmaps game-config",
"Name: game-config Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config --from-file=example-files/",
"oc get configmaps game-config -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"cat example-files/game.properties",
"enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30",
"cat example-files/ui.properties",
"color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice",
"oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties",
"oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties",
"oc get configmaps game-config-2 -o yaml",
"apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985",
"oc get configmaps game-config-3 -o yaml",
"apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985",
"oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm",
"oc get configmaps special-config -o yaml",
"apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"oc get priorityclasses",
"NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s",
"apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent priorityClassName: system-cluster-critical 1",
"oc create -f <file-name>.yaml",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/nodes/working-with-pods |
Red Hat Quay architecture | Red Hat Quay architecture Red Hat Quay 3.13 Red Hat Quay Architecture Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html-single/red_hat_quay_architecture/index |
Chapter 5. Installing on Azure | Chapter 5. Installing on Azure 5.1. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 5.1.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. OS Disk 7 VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by Standard_D8s_v3 , or other similar machine types available, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 6 65,536 per region Each default cluster requires six network interfaces. 
If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each default cluster Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the Internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. 5.1.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 5.1.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. 
Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 5.1.4. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles: User Access Administrator Owner To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 5.1.5. Creating a service principal Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Install the jq package. Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI: USD az login Log in to Azure in the web console by using your credentials. If your Azure account uses subscriptions, ensure that you are using the right subscription. View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the UUID of the correct subscription. If you are not using the right subscription, change the active subscription: USD az account set -s <id> 1 1 Substitute the value of the id for the subscription that you want to use for <id> . If you changed the active subscription, display your account information again: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the values of the tenantId and id parameters from the output. You need these values during OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> 1 1 Replace <service_principal> with the name to assign to the service principal. 
Example output Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names Retrying role assignment creation: 1/36 Retrying role assignment creation: 2/36 Retrying role assignment creation: 3/36 Retrying role assignment creation: 4/36 { "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956", "displayName": "<service_principal>", "name": "http://<service_principal>", "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Grant additional permissions to the service principal. You must always add the Contributor and User Access Administrator roles to the app registration service principal so the cluster can assign credentials for its components. To operate the Cloud Credential Operator (CCO) in mint mode , the app registration service principal also requires the Azure Active Directory Graph/Application.ReadWrite.OwnedBy API permission. To operate the CCO in passthrough mode , the app registration service principal does not require additional API permissions. For more information about CCO modes, see "About the Cloud Credential Operator" in the "Managing cloud provider credentials" section of the Authentication and authorization guide. To assign the User Access Administrator role, run the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp list --filter "appId eq '<appId>'" \ | jq '.[0].id' -r) 1 1 Replace <appId> with the appId parameter value for your service principal. To assign the Azure Active Directory Graph permission, run the following command: USD az ad app permission add --id <appId> \ 1 --api 00000002-0000-0000-c000-000000000000 \ --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role 1 Replace <appId> with the appId parameter value for your service principal. Example output Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions . Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request. USD az ad app permission grant --id <appId> \ 1 --api 00000002-0000-0000-c000-000000000000 1 Replace <appId> with the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.1.6. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
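If you want to preview the regions that your subscription can reach before you run the installation program, one option is the Azure CLI. This is a sketch only; its output is not filtered to the regions that OpenShift Container Platform supports, so cross-check it against the lists that follow:
$ az account list-locations --query "[].name" --output tsv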
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 5.1.7. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options. 5.2. Manually creating IAM for Azure In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 5.2.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator . 5.2.2. Manually create IAM The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. 
Procedure Change to the directory that contains the installation program and create the install-config.yaml file: USD openshift-install create install-config --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. From the directory that contains the installation program, proceed with your cluster creation: USD openshift-install create cluster --dir <installation_directory> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the "Upgrading clusters with manually maintained credentials" section of the installation content for your cloud provider. 5.2.3. Upgrading clusters with manually maintained credentials If credentials are added in a future release, the Cloud Credential Operator (CCO) upgradable status for a cluster with manually maintained credentials changes to false . For minor release, for example, from 4.6 to 4.7, this status prevents you from upgrading until you have addressed any updated permissions. For z-stream releases, for example, from 4.6.10 to 4.6.11, the upgrade is not blocked, but the credentials must still be updated for the new release. Use the Administrator perspective of the web console to determine if the CCO is upgradeable. Navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. 
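If you prefer the command line to the web console for this check, you can read the same condition directly from the cloud-credential cluster Operator. This is a sketch that complements, rather than replaces, the console steps above:
$ oc get clusteroperator cloud-credential \
    -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'
The command prints True or False for the Upgradeable condition.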
If the Upgradeable status in the Conditions section is False , examine the CredentialsRequest custom resource for the new release and update the manually maintained credentials on your cluster to match before upgrading. In addition to creating new credentials for the release image that you are upgrading to, you must review the required permissions for existing credentials and accommodate any new permissions requirements for existing components in the new release. The CCO cannot detect these mismatches and will not set upgradable to false in this case. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. 5.2.4. Mint mode Mint mode is the default and recommended Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS, GCP, and Azure. In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions. The benefits of mint mode include: Each cluster component has only the permissions it requires Automatic, on-going reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades One drawback is that mint mode requires admin credential storage in a cluster kube-system secret. 5.2.5. steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure 5.3. Installing a cluster quickly on Azure In OpenShift Container Platform version 4.7, you can install a cluster on Microsoft Azure that uses the default configuration options. 5.3.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.3.3. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. 
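For example, when you eventually remove the cluster, you typically run the destroy command from the same directory that holds those files; the directory name here is a placeholder:
$ ./openshift-install destroy cluster --dir <installation_directory>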
Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . 
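After you answer these prompts once, the installation program typically caches the Azure service principal values on the host, commonly in ~/.azure/osServicePrincipal.json, so that later runs do not prompt for them again. The following sketch of that file is illustrative only; the field names and file location are assumptions to verify against your installer version, and the values shown reuse the placeholder IDs from the earlier account output:
{
  "subscriptionId": "9bab1460-96d5-40b3-a78e-17b15e978a80",
  "clientId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
  "clientSecret": "<service_principal_password>",
  "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee"
}
Protect or delete this file if the host is shared, because it contains the service principal secret.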
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.3.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.3.6.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.3.6.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
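If the directory is not already on your PATH, one way to add it persistently is the setx command; the directory name here is an example only, and note that setx truncates values longer than 1024 characters:
C:\> setx PATH "%PATH%;C:\Users\<username>\bin"
Open a new command prompt afterward so that the change takes effect.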
To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.3.6.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.3.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.4. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.7, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.4.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . 
Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.4.3. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. 
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.4.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. 
azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.4.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.4.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. 
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.4.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.4.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.4.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.4. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.4.5.2. 
Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 13 fips: false 14 sshKey: ssh-ed25519 AAAA... 15 1 10 11 13 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 15 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.4.5.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.4.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
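Because the installation program consumes the install-config.yaml file during deployment, as noted earlier, you might also keep a copy of your customized file outside the installation directory before you continue. The paths in this sketch are examples:
$ mkdir -p ~/backups
$ cp <installation_directory>/install-config.yaml ~/backups/install-config-azure.yaml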
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.4.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.4.7.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.4.7.2. 
Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.4.7.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.4.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.4.9. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.4.10. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.5. Installing a cluster on Azure with network customizations In OpenShift Container Platform version 4.7, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. 
By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 5.5.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.5.3. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.5.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.5.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.5.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.5. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . 
metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.5.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.6. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.5.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.7. 
Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled).
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.5.5.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.8. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. 
platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.5.5.2. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 12 baseDomainResourceGroupName: resource_group 13 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16 1 10 12 14 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.5.5.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. 
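If you supplied an additionalTrustBundle, the extra CA certificates end up in the user-ca-bundle config map in the openshift-config namespace, as noted above. The following is a rough sketch of that object; the data key name is an assumption here, and the certificate body is just the placeholder from the example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  namespace: openshift-config
data:
  ca-bundle.crt: |   # assumed key name; holds the PEM bundle that you provided
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
The Cluster Network Operator then merges this content with the RHCOS trust bundle into the trusted-ca-bundle config map, as described above.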
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.5.6. Network configuration phases When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration: Phase 1 After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to "Installation configuration parameters". Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 5.5.7. Specifying advanced network configuration You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster. Important Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. 
The installation program deletes the manifests/ directory when creating the cluster. 5.5.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.5.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is read-only and specified in the install-config.yaml file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is read-only and specified in the install-config.yaml file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 5.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation.
mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 5.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 5.13. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. 
proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.5.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes cluster provider during the installation of your cluster. You cannot switch to hybrid networking after the installation process. In addition, the hybrid OVN-Kubernetes cluster network provider is a requirement for Windows Machine Config Operator (WMCO). Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 5.5.10. 
Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.5.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.5.11.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. 
Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.5.11.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.5.11.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.5.14. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.6. 
Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.7, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.6.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.6.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.7, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.6.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. 
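In the install-config.yaml file, the existing network is referenced through the platform.azure fields described in the parameter tables earlier in this chapter. The following fragment is a sketch only; the resource group, VNet, and subnet names are placeholders that you replace with the names of your existing resources:
platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: resource_group
    networkResourceGroupName: vnet_resource_group   # placeholder: resource group that contains the existing VNet
    virtualNetwork: existing_vnet                   # placeholder: name of the existing VNet
    controlPlaneSubnet: control_plane_subnet        # placeholder: subnet for control plane machines
    computeSubnet: compute_subnet                   # placeholder: subnet for compute machines
The two subnet fields correspond to the control plane and compute subnets that you must provide, and the networkResourceGroupName cannot be the same as the baseDomainResourceGroupName, as noted in the parameter table.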
Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.6.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.14. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Note Since cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. 5.6.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.6.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. 
Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.6.4. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster.
You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . 
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.6.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.6.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.15. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.6.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.16. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. 
Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.6.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.17. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . 
controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.6.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.18. 
Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. Standard_LRS , Premium_LRS , or StandardSSD_LRS . The default is Premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. Premium_LRS or StandardSSD_LRS . The default is Premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid subnet name, for example control_plane_subnet . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid subnet name, for example compute_subnet . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.6.6.2. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 1 10 11 17 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 Specify the name of the resource group that contains the DNS zone for your base domain. 13 If you use an existing VNet, specify the name of the resource group that contains it. 14 If you use an existing VNet, specify its name. 15 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 16 If you use an existing VNet, specify the name of the subnet to host the compute machines. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.6.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.6.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.6.8.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.6.8.2. 
Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.6.8.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.6.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.6.11. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.7. Installing a private cluster on Azure In OpenShift Container Platform version 4.7, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.7.1. 
Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.7.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared with other clusters on the network. Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, to the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 5.7.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 5.7.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 5.7.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the Internet.
Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the Internet is possible to pull container images, unless using an internal registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for Internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound Internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no Internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An internal registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 5.7.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.7, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.7.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. 
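Before walking through those requirements, the following excerpt sketches how the private-cluster options described in the previous sections are typically combined with an existing VNet in the install-config.yaml file. It is a minimal, illustrative fragment only: the resource group, VNet, and subnet names are placeholders, the publish and outboundType parameters are described in the parameter tables later in this section, and user-defined routing additionally assumes that outbound routing for the VNet is already configured before you install the cluster.

platform:
  azure:
    region: centralus
    networkResourceGroupName: vnet_resource_group   # resource group that contains the existing private VNet
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: UserDefinedRouting                # omit or set to LoadBalancer if you are not using user-defined routing
publish: Internal                                   # keeps the API server and Ingress endpoints private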
In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.7.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.19. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Note Since cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. 5.7.3.2. 
Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.7.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.7.5. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. 
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.7.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.7.7. Manually creating the installation configuration file For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the Internet, you must manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 5.7.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.7.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.20. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.7.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.21. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. 
Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.7.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.22. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . 
controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.7.7.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.23. 
Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.7.7.2. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 publish: Internal 21 1 10 11 18 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 12 Specify the name of the resource group that contains the DNS zone for your base domain. 13 If you use an existing VNet, specify the name of the resource group that contains it. 14 If you use an existing VNet, specify its name. 15 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 16 If you use an existing VNet, specify the name of the subnet to host the compute machines. 17 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 
20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 21 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External . 5.7.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . 
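After the cluster is installed, one way to confirm the resulting proxy configuration is to inspect the Proxy object and the generated config map. This is a verification sketch, not a step in the documented procedure:
$ oc get proxy/cluster -o yaml                               # shows the httpProxy, httpsProxy, noProxy, and trustedCA values in use
$ oc get configmap user-ca-bundle -n openshift-config        # present only if you supplied additionalTrustBundle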
Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
5.7.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.7.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc .
5.7.9.1. Installing the OpenShift CLI on Linux
You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure.
Procedure
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.7.9.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.7.9.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.7.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.7.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.8. 
Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.7, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 5.8.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster and determine the tested and validated government region to deploy the cluster to. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. 5.8.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 5.8.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the Internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Additionally, you must deploy a private cluster from a machine that has access the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 5.8.3.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. 
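For orientation only, the install-config.yaml stanzas that matter for a private cluster in a government region look like the following excerpt. The region, resource group, VNet, and subnet names are placeholders, and the complete sample file appears later in this section.
platform:
  azure:
    region: usgovvirginia                      # a MAG region; cannot be selected from the guided prompts
    cloudName: AzureUSGovernmentCloud          # dedicated cloud instance that matches the region
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    outboundType: UserDefinedRouting
publish: Internal                              # makes the DNS, Ingress Controller, and API server private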
Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to Internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 5.8.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 5.8.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the Internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the Internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the Internet is possible to pull container images, unless using an internal registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for Internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound Internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the Internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. 
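For the network address translation option described above, a rough Azure CLI sketch follows. The resource names are illustrative, and the linked Azure documentation remains the authoritative reference for the exact procedure:
$ az network public-ip create --resource-group my-rg --name my-nat-ip --sku Standard
$ az network nat gateway create --resource-group my-rg --name my-nat-gw \
    --public-ip-addresses my-nat-ip --idle-timeout 10
$ az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
    --name my-subnet --nat-gateway my-nat-gw   # repeat for each subnet that needs outbound access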
Private cluster with no Internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An internal registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 5.8.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.7, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 5.8.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. 
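Before you run the installation program, a quick check that the VNet and both subnets exist and carry the expected address ranges can prevent a failed install. This is an optional verification sketch that reuses the placeholder names from the sample file in this document; it is not a documented requirement:
$ az network vnet show --resource-group vnet_resource_group --name vnet --query addressSpace
$ az network vnet subnet list --resource-group vnet_resource_group --vnet-name vnet \
    --query '[].{name:name, cidr:addressPrefix}' -o table   # expect one control plane and one compute subnet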
If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 5.8.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 5.24. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x Note Since cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. 5.8.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 5.8.4.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 5.8.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. 
Before you update the cluster, you update the content of the mirror registry.
5.8.6. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues.
Note In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs .
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Running this command generates an SSH key that does not require a password in the location that you specified.
Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent :
$ ssh-add <path>/<file_name> 1
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa .
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.8.7. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.8.8. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 5.8.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 5.8.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.25. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . 
String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.8.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.26. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 5.8.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.27. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. 
This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 5.8.8.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.28. Additional Azure parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . 
platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 5.8.8.2. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: usgovvirginia baseDomainResourceGroupName: resource_group 11 networkResourceGroupName: vnet_resource_group 12 virtualNetwork: vnet 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 outboundType: UserDefinedRouting 16 cloudName: AzureUSGovernmentCloud 17 pullSecret: '{"auths": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 publish: Internal 21 1 10 18 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 Specify the name of the resource group that contains the DNS zone for your base domain. 12 If you use an existing VNet, specify the name of the resource group that contains it. 13 If you use an existing VNet, specify its name. 
14 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 15 If you use an existing VNet, specify the name of the subnet to host the compute machines. 16 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 17 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 21 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the Internet. The default value is External . 5.8.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 5.8.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.8.10.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.8.10.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.8.10.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.8.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.8.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.8.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 5.9. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.7, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 5.9.1. Prerequisites Review details about the OpenShift Container Platform installation and update processes. Configure an Azure account to host the cluster. Download the Azure CLI and install it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was last tested using version 2.2.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. If you do not allow the system to manage identity and access management (IAM), then a cluster administrator can manually create and maintain IAM credentials . Manual mode can also be used in environments where the cloud IAM APIs are not reachable. Note Be sure to also review this site list if you are configuring a proxy. 5.9.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 5.9.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 5.9.3.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. OS Disk 7 VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by Standard_D8s_v3 , or other similar machine types available, and the target of 5000 IOPS, at least a P30 disk is required. Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. 
The reads performed from the cache, which is present either in the VM memory or in the local SSD disk, are much faster than the reads from the data disk, which is in the blob storage. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 6 65,536 per region Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each default cluster Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the Internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. 5.9.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 
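The zone creation and name server extraction steps described above can be scripted with the Azure CLI. The following is a minimal sketch, not taken from the installation program itself, assuming a hypothetical resource group named ocp-dns-rg and the example subdomain clusters.openshiftcorp.com; substitute your own values:

# Create (or reuse) a resource group to hold the public DNS zone.
$ az group create --name ocp-dns-rg --location centralus

# Create the public hosted zone for the cluster's base domain.
$ az network dns zone create -g ocp-dns-rg -n clusters.openshiftcorp.com

# List the authoritative name servers to set at your registrar or to add
# as delegation (NS) records in the parent domain.
$ az network dns zone show -g ocp-dns-rg -n clusters.openshiftcorp.com --query nameServers -o tsv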
You can view Azure's DNS solution by visiting this example for creating DNS zones . 5.9.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 5.9.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.9.3.5. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles: User Access Administrator Owner To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 5.9.3.6. Creating a service principal Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Install the jq package. Your Azure account has the required roles for the subscription that you use. Procedure Log in to the Azure CLI: USD az login Log in to Azure in the web console by using your credentials. If your Azure account uses subscriptions, ensure that you are using the right subscription. 
View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the UUID of the correct subscription. If you are not using the right subscription, change the active subscription: USD az account set -s <id> 1 1 Substitute the value of the id for the subscription that you want to use for <id> . If you changed the active subscription, display your account information again: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the values of the tenantId and id parameters from the output. You need these values during OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> 1 1 Replace <service_principal> with the name to assign to the service principal. Example output Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names Retrying role assignment creation: 1/36 Retrying role assignment creation: 2/36 Retrying role assignment creation: 3/36 Retrying role assignment creation: 4/36 { "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956", "displayName": "<service_principal>", "name": "http://<service_principal>", "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Grant additional permissions to the service principal. You must always add the Contributor and User Access Administrator roles to the app registration service principal so the cluster can assign credentials for its components. To operate the Cloud Credential Operator (CCO) in mint mode , the app registration service principal also requires the Azure Active Directory Graph/Application.ReadWrite.OwnedBy API permission. To operate the CCO in passthrough mode , the app registration service principal does not require additional API permissions. For more information about CCO modes, see "About the Cloud Credential Operator" in the "Managing cloud provider credentials" section of the Authentication and authorization guide. 
To assign the User Access Administrator role, run the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp list --filter "appId eq '<appId>'" \ | jq '.[0].id' -r) 1 1 Replace <appId> with the appId parameter value for your service principal. To assign the Azure Active Directory Graph permission, run the following command: USD az ad app permission add --id <appId> \ 1 --api 00000002-0000-0000-c000-000000000000 \ --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role 1 Replace <appId> with the appId parameter value for your service principal. Example output Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions . Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request. USD az ad app permission grant --id <appId> \ 1 --api 00000002-0000-0000-c000-000000000000 1 Replace <appId> with the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.9.3.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 5.9.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. 
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.9.5. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa . steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines. 5.9.6.
Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 5.9.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 5.9.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 5.9.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.6.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. 
For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 5.9.6.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. 
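Before exporting the common variables and generating the Ignition config files, it can help to confirm that the manifest edits above took effect. The following is a small verification sketch, assuming <installation_directory> is the same installation directory used in the previous commands:

# The control plane and worker machine manifests should no longer be present.
$ ls <installation_directory>/openshift/ | grep -E 'master-machines|worker-machineset' || echo "machine manifests removed"

# The scheduler manifest should report mastersSchedulable: false.
$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml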
When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: 5.9.7. Creating the Azure resource group and identity You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" 5.9.8. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally; therefore, you must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. 
If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Choose the RHCOS version to use and export the URL of its VHD to an environment variable: USD export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.7/data/data/rhcos.json | jq -r .azure.url` Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Copy the chosen VHD to a blob: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" To track the progress of the VHD copy task, run this script: Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 5.9.9. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 5.9.10. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. 
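If you want to check an ARM template before deploying it, the Azure CLI can validate it and preview its effect without creating any resources. The following is a sketch, assuming the VNet template has been saved as 01_vnet.json as described in the procedure below and that RESOURCE_GROUP and INFRA_ID are exported as shown earlier:

# Validate the template and its parameters without creating resources.
$ az deployment group validate -g ${RESOURCE_GROUP} --template-file "01_vnet.json" --parameters baseName="${INFRA_ID}"

# Optionally preview the changes that the deployment would make.
$ az deployment group what-if -g ${RESOURCE_GROUP} --template-file "01_vnet.json" --parameters baseName="${INFRA_ID}"

The same check can be applied to the other ARM templates used later in this installation.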
Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 5.9.10.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 5.1. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 5.9.11. 
Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.9.11.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 5.2. 02_storage.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vhdBlobURL" : { "type" : "string", "metadata" : { "description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables" : { "location" : "[resourceGroup().location]", "imageName" : "[concat(parameters('baseName'), '-image')]" }, "resources" : [ { "apiVersion" : "2018-06-01", "type": "Microsoft.Compute/images", "name": "[variables('imageName')]", "location" : "[variables('location')]", "properties": { "storageProfile": { "osDisk": { "osType": "Linux", "osState": "Generalized", "blobUri": "[parameters('vhdBlobURL')]", "storageAccountType": "Standard_LRS" } } } } ] } 5.9.12. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster to establish a network connection, which allows them to download their Ignition config files. It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 
You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

Table 5.29. All machines to all machines

Protocol   Port            Description
ICMP       N/A             Network reachability tests
TCP        1936            Metrics
           9000 - 9999     Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099.
           10250 - 10259   The default ports that Kubernetes reserves
           10256           openshift-sdn
UDP        4789            VXLAN and Geneve
           6081            VXLAN and Geneve
           9000 - 9999     Host level services, including the node exporter on ports 9100 - 9101.
TCP/UDP    30000 - 32767   Kubernetes node port

Table 5.30. All machines to control plane

Protocol   Port            Description
TCP        6443            Kubernetes API

Table 5.31. Control plane machines to control plane machines

Protocol   Port            Description
TCP        2379 - 2380     etcd server and peer ports

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:

API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:

Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes.

A stateless load balancing algorithm. The options vary based on the load balancer implementation.

Important Do not configure session persistence for an API load balancer.

Configure the following ports on both the front and back of the load balancers:

Table 5.32. API load balancer

Port 6443 (Kubernetes API server), internal and external. Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe.

Port 22623 (Machine config server), internal only. Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.

Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:

Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes.

A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Configure the following ports on both the front and back of the load balancers:

Table 5.33. Application Ingress load balancer

Port 443 (HTTPS traffic), internal and external. Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.

Port 80 (HTTP traffic), internal and external. Back-end machines (pool members): The machines that run the Ingress router pods, compute, or worker, by default.

Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.

Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.

NTP configuration

OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines reads the information and can sync the clock with the NTP servers.

5.9.13. Creating networking and load balancing components in Azure

You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template.

Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites

Configure an Azure account.

Generate the Ignition config files for your cluster.

Create and configure a VNet and associated subnets in Azure.

Procedure

Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires.

Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/03_infra.json" \
  --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 1
  --parameters baseName="${INFRA_ID}" 2

1 The name of the private DNS zone.
2 The base name to be used in resource names; this is usually the cluster's infrastructure ID.

Create an api DNS record in the public zone for the API public load balancer. The ${BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists.
Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 5.9.13.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 5.3. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "public-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), 
'/backendAddressPools/public-lb-backend')]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 5.9.14. 
Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the following variables required by the bootstrap machine deployment: USD export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv` USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters baseName="USD{INFRA_ID}" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The SSH RSA public key file as a string. 3 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.9.14.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 5.4. 04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string." 
} }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : 
"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 5.9.15. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 3 --parameters baseName="USD{INFRA_ID}" 4 1 The Ignition content for the control plane nodes (also known as the master nodes). 2 The SSH RSA public key file as a string. 3 The name of the private DNS zone to which the control plane nodes are attached. 4 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.9.15.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 5.5. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone the master nodes are going to be attached to" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : 
"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/SRV", "name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]", "location" : "[variables('location')]", "properties": { "ttl": 60, "copy": [{ "name": "srvRecords", "count": "[length(variables('vmNames'))]", "input": { "priority": 0, "weight" : 10, "port" : 2380, "target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]" } }] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "copy" : { "name" : "dnsCopy", "count" : "[length(variables('vmNames'))]" }, "name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]", "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]", "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]" ], 
"properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 5.9.16. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 5.9.17. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. 
In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters sshKeyData="USD{SSH_KEY}" \ 2 --parameters baseName="USD{INFRA_ID}" 3 1 The Ignition content for the worker nodes. 2 The SSH RSA public key file as a string. 3 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.9.17.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 5.6. 
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "metadata" : { "description" : "SSH RSA public key file as a string" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "allowedValues" : [ "Standard_A2", "Standard_A3", "Standard_A4", "Standard_A5", "Standard_A6", "Standard_A7", "Standard_A8", "Standard_A9", "Standard_A10", "Standard_A11", "Standard_D2", "Standard_D3", "Standard_D4", "Standard_D11", "Standard_D12", "Standard_D13", "Standard_D14", "Standard_D2_v2", "Standard_D3_v2", "Standard_D4_v2", "Standard_D5_v2", "Standard_D8_v3", "Standard_D11_v2", "Standard_D12_v2", "Standard_D13_v2", "Standard_D14_v2", "Standard_E2_v3", "Standard_E4_v3", "Standard_E8_v3", "Standard_E16_v3", "Standard_E32_v3", "Standard_E64_v3", "Standard_E2s_v3", "Standard_E4s_v3", "Standard_E8s_v3", "Standard_E16s_v3", "Standard_E32s_v3", "Standard_E64s_v3", "Standard_G1", "Standard_G2", "Standard_G3", "Standard_G4", "Standard_G5", "Standard_DS2", "Standard_DS3", "Standard_DS4", "Standard_DS11", "Standard_DS12", "Standard_DS13", "Standard_DS14", "Standard_DS2_v2", "Standard_DS3_v2", "Standard_DS4_v2", "Standard_DS5_v2", "Standard_DS11_v2", "Standard_DS12_v2", "Standard_DS13_v2", "Standard_DS14_v2", "Standard_GS1", "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5", "Standard_D2s_v3", "Standard_D4s_v3", "Standard_D8s_v3" ], "metadata" : { "description" : "The size of the each Node Virtual Machine" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "imageName" : "[concat(parameters('baseName'), '-image')]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", 
"properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : true, "ssh" : { "publicKeys" : [ { "path" : "[variables('sshKeyPath')]", "keyData" : "[parameters('sshKeyData')]" } ] } } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 5.9.18. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 5.9.18.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.9.18.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 5.9.18.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.9.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.9.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.9.21. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install the jq package. Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 5.9.22. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.9.23. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
5.9.22. Completing an Azure installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.

Prerequisites

Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure.
Install the oc CLI and log in.

Procedure

Complete the cluster installation:

$ ./openshift-install --dir <installation_directory> wait-for install-complete 1

Example output

INFO Waiting up to 30m0s for the cluster to initialize...

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

5.9.23. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

5.10. Uninstalling a cluster on Azure

You can remove a cluster that you deployed to Microsoft Azure.

5.10.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Note: After uninstallation, check your cloud provider for any resources that were not removed properly, especially with user-provisioned infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer cannot access; one way to check is sketched after this procedure.

Prerequisites

Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.

Procedure

From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

$ ./openshift-install destroy cluster \
--dir <installation_directory> --log-level info 1 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
2 To view different details, specify warn, debug, or error instead of info.

Note: You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
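To act on the note about leftover resources, you can list what remains in the resource groups that the cluster used once the destroy command finishes. This is a minimal sketch, assuming you are logged in with the Azure CLI; <resource_group> is a placeholder for each resource group you want to inspect.

# List any resources still present; an empty table means nothing was left behind in this group.
az resource list -g <resource_group> -o table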
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> 1",
"Changing \"<service_principal>\" to a valid URI of \"http://<service_principal>\", which is the required format used for service principal names Retrying role assignment creation: 1/36 Retrying role assignment creation: 2/36 Retrying role assignment creation: 3/36 Retrying role assignment creation: 4/36 { \"appId\": \"8bd0d04d-0ac2-43a8-928d-705c598c6956\", \"displayName\": \"<service_principal>\", \"name\": \"http://<service_principal>\", \"password\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"tenant\": \"6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp list --filter \"appId eq '<appId>'\" | jq '.[0].id' -r) 1",
"az ad app permission add --id <appId> \\ 1 --api 00000002-0000-0000-c000-000000000000 --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role",
"Invoking \"az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000\" is needed to make the change effective",
"az ad app permission grant --id <appId> \\ 1 --api 00000002-0000-0000-c000-000000000000",
"openshift-install create install-config --dir <installation_directory>",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"openshift-install create cluster --dir <installation_directory>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 13 fips: false 14 sshKey: ssh-ed25519 AAAA... 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 12 baseDomainResourceGroupName: resource_group 13 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: centralus 11 baseDomainResourceGroupName: resource_group 12 networkResourceGroupName: vnet_resource_group 13 virtualNetwork: vnet 14 controlPlaneSubnet: control_plane_subnet 15 computeSubnet: compute_subnet 16 outboundType: UserDefinedRouting 17 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 publish: Internal 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: type: Standard_D2s_v3 osDisk: diskSizeGB: 512 8 diskType: Standard_LRS zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: region: usgovvirginia baseDomainResourceGroupName: resource_group 11 networkResourceGroupName: vnet_resource_group 12 virtualNetwork: vnet 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 outboundType: UserDefinedRouting 16 cloudName: AzureUSGovernmentCloud 17 pullSecret: '{\"auths\": ...}' 18 fips: false 19 sshKey: ssh-ed25519 AAAA... 20 publish: Internal 21",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> 1",
"Changing \"<service_principal>\" to a valid URI of \"http://<service_principal>\", which is the required format used for service principal names Retrying role assignment creation: 1/36 Retrying role assignment creation: 2/36 Retrying role assignment creation: 3/36 Retrying role assignment creation: 4/36 { \"appId\": \"8bd0d04d-0ac2-43a8-928d-705c598c6956\", \"displayName\": \"<service_principal>\", \"name\": \"http://<service_principal>\", \"password\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"tenant\": \"6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp list --filter \"appId eq '<appId>'\" | jq '.[0].id' -r) 1",
"az ad app permission add --id <appId> \\ 1 --api 00000002-0000-0000-c000-000000000000 --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role",
"Invoking \"az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000\" is needed to make the change effective",
"az ad app permission grant --id <appId> \\ 1 --api 00000002-0000-0000-c000-000000000000",
"tar xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.7/data/data/rhcos.json | jq -r .azure.url`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"status=\"unknown\" while [ \"USDstatus\" != \"success\" ] do status=`az storage blob show --container-name vhd --name \"rhcos.vhd\" --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv --query properties.copy.status` echo USDstatus done",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --public-access blob",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vhdBlobURL\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Compute/images\", \"name\": \"[variables('imageName')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"storageProfile\": { \"osDisk\": { \"osType\": \"Linux\", \"osState\": \"Generalized\", \"blobUri\": \"[parameters('vhdBlobURL')]\", \"storageAccountType\": \"Standard_LRS\" } } } } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"public-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": 
\"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"export BOOTSTRAP_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -n \"bootstrap.ign\" -o tsv` export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string.\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { 
\"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 3 --parameters baseName=\"USD{INFRA_ID}\" 4",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone the master nodes are going to be attached to\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[concat(parameters('baseName'), '-public-lb')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { 
\"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/SRV\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]\", \"location\" : \"[variables('location')]\", \"properties\": { \"ttl\": 60, \"copy\": [{ \"name\": \"srvRecords\", \"count\": \"[length(variables('vmNames'))]\", \"input\": { \"priority\": 0, \"weight\" : 10, \"port\" : 2380, \"target\" : \"[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]\" } }] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"copy\" : { \"name\" : \"dnsCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\": \"[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]\", \"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": 
\"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters sshKeyData=\"USD{SSH_KEY}\" \\ 2 --parameters baseName=\"USD{INFRA_ID}\" 3",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"metadata\" : { \"description\" : \"SSH RSA public key file as a string\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"allowedValues\" : [ \"Standard_A2\", \"Standard_A3\", \"Standard_A4\", \"Standard_A5\", \"Standard_A6\", \"Standard_A7\", \"Standard_A8\", \"Standard_A9\", \"Standard_A10\", \"Standard_A11\", \"Standard_D2\", \"Standard_D3\", \"Standard_D4\", \"Standard_D11\", \"Standard_D12\", \"Standard_D13\", \"Standard_D14\", \"Standard_D2_v2\", \"Standard_D3_v2\", \"Standard_D4_v2\", \"Standard_D5_v2\", \"Standard_D8_v3\", \"Standard_D11_v2\", \"Standard_D12_v2\", \"Standard_D13_v2\", \"Standard_D14_v2\", \"Standard_E2_v3\", \"Standard_E4_v3\", \"Standard_E8_v3\", \"Standard_E16_v3\", \"Standard_E32_v3\", \"Standard_E64_v3\", \"Standard_E2s_v3\", \"Standard_E4s_v3\", \"Standard_E8s_v3\", \"Standard_E16s_v3\", \"Standard_E32s_v3\", \"Standard_E64s_v3\", \"Standard_G1\", \"Standard_G2\", \"Standard_G3\", \"Standard_G4\", \"Standard_G5\", \"Standard_DS2\", \"Standard_DS3\", \"Standard_DS4\", \"Standard_DS11\", \"Standard_DS12\", \"Standard_DS13\", \"Standard_DS14\", \"Standard_DS2_v2\", \"Standard_DS3_v2\", \"Standard_DS4_v2\", \"Standard_DS5_v2\", \"Standard_DS11_v2\", \"Standard_DS12_v2\", \"Standard_DS13_v2\", \"Standard_DS14_v2\", \"Standard_GS1\", \"Standard_GS2\", \"Standard_GS3\", \"Standard_GS4\", \"Standard_GS5\", \"Standard_D2s_v3\", \"Standard_D4s_v3\", \"Standard_D8s_v3\" ], \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"imageName\" : \"[concat(parameters('baseName'), '-image')]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : 
\"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : true, \"ssh\" : { \"publicKeys\" : [ { \"path\" : \"[variables('sshKeyPath')]\", \"keyData\" : \"[parameters('sshKeyData')]\" } ] } } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/images', variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com grafana-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-on-azure |
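Note: The certificate signing request (CSR) approvals and node checks shown in the commands above usually have to be repeated, because each machine first issues a client request and then a serving request. The following bash sketch automates that loop by reusing only the oc commands already listed; the expected node count, the retry limit, and the 30-second sleep interval are assumptions that you would adjust for your own cluster, and KUBECONFIG is assumed to be already exported as shown earlier.
#!/usr/bin/env bash
# Approve pending CSRs until the expected number of nodes reports Ready.
# Assumption: 3 control plane nodes plus 2 compute nodes, as in the example output above.
EXPECTED_NODES=5
for attempt in $(seq 1 30); do
  # Approve every CSR that has no status yet (client and serving requests).
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  READY=$(oc get nodes --no-headers 2>/dev/null | awk '$2 == "Ready"' | wc -l)
  echo "Attempt ${attempt}: ${READY}/${EXPECTED_NODES} nodes Ready"
  [ "${READY}" -ge "${EXPECTED_NODES}" ] && break
  sleep 30
done
oc get nodes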
Chapter 2. RPM topologies | Chapter 2. RPM topologies The RPM installer deploys Ansible Automation Platform on Red Hat Enterprise Linux by using RPMs to install the platform on host machines. Customers manage the product and infrastructure lifecycle. 2.1. RPM growth topology The growth topology is intended for organizations that are getting started with Ansible Automation Platform and do not require redundancy or higher compute for large volumes of automation. This topology allows for smaller footprint deployments. 2.1.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 2.1. Infrastructure topology diagram Each virtual machine (VM) has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 60 GB local disk, and 3000 IOPS. Table 2.1. Infrastructure topology VM count Purpose Example VM group names 1 Platform gateway with colocated Redis automationgateway 1 Automation controller automationcontroller 1 Private automation hub automationhub 1 Event-Driven Ansible automationedacontroller 1 Automation mesh execution node execution_nodes 1 Database database 2.1.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 2.2. Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8. Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9. CPU architecture x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome Database PostgreSQL 15 2.1.3. Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 2.3. Network ports and protocols Port number Protocol Service Source Destination 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation hub 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation controller 80/443 TCP HTTP/HTTPS Automation controller Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Automation controller 80/443 TCP HTTP/HTTPS Platform gateway Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Event-Driven Ansible 5432 TCP PostgreSQL Event-Driven Ansible Database 5432 TCP PostgreSQL Platform gateway Database 5432 TCP PostgreSQL Automation hub Database 5432 TCP PostgreSQL Automation controller Database 6379 TCP Redis Event-Driven Ansible Redis node 6379 TCP Redis Platform gateway Redis node 8443 TCP HTTPS Platform gateway Platform gateway 27199 TCP Receptor Automation controller Execution node 2.1.4. Example inventory file Use the example inventory file to perform an installation for this topology: # This is the Ansible Automation Platform installer inventory file intended for the RPM growth deployment topology. # Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. 
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/rpm-topologies # # Consult the docs if you are unsure what to add # For all optional variables consult the Ansible Automation Platform documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] gateway.example.org # This section is for your automation controller hosts # ----------------------------------------------------- [automationcontroller] controller.example.org [automationcontroller:vars] peers=execution_nodes # This section is for your Ansible Automation Platform execution hosts # ----------------------------------------------------- [execution_nodes] exec.example.org # This section is for your automation hub hosts # ----------------------------------------------------- [automationhub] hub.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationedacontroller] eda.example.org # This section is for the Ansible Automation Platform database # ----------------------------------------------------- [database] db.example.org [all:vars] # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone # Platform gateway # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=db.example.org automationgateway_pg_password=<set your own> # Automation controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-controller-variables # ----------------------------------------------------- admin_password=<set your own> pg_host=db.example.org pg_password=<set your own> # Automation hub # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-hub-variables # ----------------------------------------------------- automationhub_admin_password=<set your own> automationhub_pg_host=db.example.org automationhub_pg_password=<set your own> # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=db.example.org automationedacontroller_pg_password=<set your own> 2.2. RPM mixed growth topology The growth topology is intended for organizations that are getting started with Ansible Automation Platform and do not require redundancy or higher compute for large volumes of automation. This topology allows for smaller footprint deployments. 
The mixed topology has different versions of Ansible Automation Platform intended for configuring a new installation of Event-Driven Ansible 1.1 with automation controller 4.4 or 4.5. 2.2.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 2.2. Infrastructure topology diagram Note Here, automation controller and automation hub are at 2.4x while the Event-Driven Ansible and platform gateway components are at 2.5 Each virtual machine (VM) has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 60 GB local disk, and 3000 IOPS. Table 2.4. Infrastructure topology VM count Purpose Ansible Automation Platform version Example VM group names 1 Platform gateway with colocated Redis 2.5 automationgateway 1 Automation controller 2.4 automationcontroller 1 Private automation hub 2.4 automationhub 1 Event-Driven Ansible 2.5 automationedacontroller 1 Automation mesh execution node 2.4 execution_nodes 1 Database 2.4 database 2.2.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 2.5. Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8. Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9. CPU architecture x86_64, AArch64 Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome Database PostgreSQL 15 2.2.3. Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 2.6. Network ports and protocols Port number Protocol Service Source Destination 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation hub 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation controller 80/443 TCP HTTP/HTTPS Automation controller Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Automation controller 80/443 TCP HTTP/HTTPS Platform gateway Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Event-Driven Ansible 5432 TCP PostgreSQL Event-Driven Ansible Database 5432 TCP PostgreSQL Platform gateway Database 5432 TCP PostgreSQL Automation hub Database 5432 TCP PostgreSQL Automation controller Database 6379 TCP Redis Event-Driven Ansible Redis node 6379 TCP Redis Platform gateway Redis node 8443 TCP HTTPS Platform gateway Platform gateway 27199 TCP Receptor Automation controller Execution node 2.2.4. Example inventory file Use the example inventory file to perform an installation for this topology: # This is the Ansible Automation Platform installer inventory file intended for the mixed RPM growth deployment topology. # Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. 
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/rpm-topologies # # Consult the docs if you are unsure what to add # For all optional variables consult the Red Hat documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] gateway.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationedacontroller] eda.example.org [all:vars] # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone # Platform gateway # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=db.example.org automationgateway_pg_password=<set your own> # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=db.example.org automationedacontroller_pg_password=<set your own> 2.3. RPM enterprise topology The enterprise topology is intended for organizations that require Ansible Automation Platform to be deployed with redundancy or higher compute for large volumes of automation. 2.3.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 2.3. Infrastructure topology diagram Each virtual machine (VM) has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 60 GB local disk, and 3000 IOPS. Table 2.7. Infrastructure topology VM count Purpose Example VM group names 2 Platform gateway with colocated Redis automationgateway 2 Automation controller automationcontroller 2 Private automation hub with colocated Redis automationhub 2 Event-Driven Ansible with colocated Redis automationedacontroller 1 Automation mesh hop node execution_nodes 2 Automation mesh execution node execution_nodes 1 Externally managed database service N/A 1 HAProxy load balancer in front of platform gateway (externally managed) N/A Note 6 VMs are required for a Redis high availability (HA) compatible deployment. Redis can be colocated on each Ansible Automation Platform component VM except for automation controller, execution nodes, or the PostgreSQL database. 2.3.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 2.8. Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8. 
Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9. CPU architecture x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome Database PostgreSQL 15 2.3.3. Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 2.9. Network ports and protocols Port number Protocol Service Source Destination 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation hub 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation controller 80/443 TCP HTTP/HTTPS Automation controller Automation hub 80/443 TCP HTTP/HTTPS HAProxy load balancer Platform gateway 80/443 TCP HTTP/HTTPS Platform gateway Automation controller 80/443 TCP HTTP/HTTPS Platform gateway Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Event-Driven Ansible 5432 TCP PostgreSQL Event-Driven Ansible External database 5432 TCP PostgreSQL Platform gateway External database 5432 TCP PostgreSQL Automation hub External database 5432 TCP PostgreSQL Automation controller External database 6379 TCP Redis Event-Driven Ansible Redis node 6379 TCP Redis Platform gateway Redis node 8443 TCP HTTPS Platform gateway Platform gateway 16379 TCP Redis Redis node Redis node 27199 TCP Receptor Automation controller Hop node and execution node 27199 TCP Receptor Hop node Execution node 2.3.4. Example inventory file Use the example inventory file to perform an installation for this topology: # This is the Ansible Automation Platform enterprise installer inventory file # Consult the docs if you are unsure what to add # For all optional variables consult the Red Hat documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org # This section is for your automation controller hosts # ----------------------------------------------------- [automationcontroller] controller1.example.org controller2.example.org [automationcontroller:vars] peers=execution_nodes # This section is for your Ansible Automation Platform execution hosts # ----------------------------------------------------- [execution_nodes] hop1.example.org node_type='hop' exec1.example.org exec2.example.org # This section is for your automation hub hosts # ----------------------------------------------------- [automationhub] hub1.example.org hub2.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationedacontroller] eda1.example.org eda2.example.org [redis] gateway1.example.org gateway2.example.org hub1.example.org hub2.example.org eda1.example.org eda2.example.org [all:vars] # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> # Platform gateway # 
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=<set your own> automationgateway_pg_database=<set your own> automationgateway_pg_username=<set your own> automationgateway_pg_password=<set your own> # Automation controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-controller-variables # ----------------------------------------------------- admin_password=<set your own> pg_host=<set your own> pg_database=<set your own> pg_username=<set your own> pg_password=<set your own> # Automation hub # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-hub-variables # ----------------------------------------------------- automationhub_admin_password=<set your own> automationhub_pg_host=<set your own> automationhub_pg_database=<set your own> automationhub_pg_username=<set your own> automationhub_pg_password=<set your own> # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=<set your own> automationedacontroller_pg_database=<set your own> automationedacontroller_pg_username=<set your own> automationedacontroller_pg_password=<set your own> 2.4. RPM mixed enterprise topology The enterprise topology is intended for organizations that require Ansible Automation Platform to be deployed with redundancy or higher compute for large volumes of automation. 2.4.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 2.4. Infrastructure topology diagram Note Here, automation controller and automation hub are at 2.4x while the Event-Driven Ansible and platform gateway components are at 2.5 Each VM has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 60 GB local disk, and 3000 IOPS. Table 2.10. Infrastructure topology VM count Purpose Ansible Automation Platform version Example VM group names 3 Platform gateway with colocated Redis 2.5 automationgateway 2 Automation controller 2.4 automationcontroller 2 Private automation hub 2.4 automationhub 3 Event-Driven Ansible with colocated Redis 2.5 automationedacontroller 1 Automation mesh hop node 2.4 execution_nodes 2 Automation mesh execution node 2.4 execution_nodes 1 Externally managed database service N/A N/A 1 HAProxy load balancer in front of platform gateway (externally managed) N/A N/A Note 6 VMs are required for a Redis high availability (HA) compatible deployment. Redis can be colocated on each Ansible Automation Platform 2.5 component VM except for automation controller, execution nodes, or the PostgreSQL database. 2.4.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 2.11. 
Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8. Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9. CPU architecture x86_64, AArch64 Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome Database PostgreSQL 15 2.4.3. Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 2.12. Network ports and protocols Port number Protocol Service Source Destination 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation hub 80/443 TCP HTTP/HTTPS Event-Driven Ansible Automation controller 80/443 TCP HTTP/HTTPS Automation controller Automation hub 80/443 TCP HTTP/HTTPS HAProxy load balancer Platform gateway 80/443 TCP HTTP/HTTPS Platform gateway Automation controller 80/443 TCP HTTP/HTTPS Platform gateway Automation hub 80/443 TCP HTTP/HTTPS Platform gateway Event-Driven Ansible 5432 TCP PostgreSQL Event-Driven Ansible External database 5432 TCP PostgreSQL Platform gateway External database 5432 TCP PostgreSQL Automation hub External database 5432 TCP PostgreSQL Automation controller External database 6379 TCP Redis Event-Driven Ansible Redis node 6379 TCP Redis Platform gateway Redis node 8443 TCP HTTPS Platform gateway Platform gateway 16379 TCP Redis Redis node Redis node 27199 TCP Receptor Automation controller Hop node and execution node 27199 TCP Receptor Hop node Execution node 2.4.4. 
Example inventory file Use the example inventory file to perform an installation for this topology: # This is the Ansible Automation Platform mixed enterprise installer inventory file # Consult the docs if you are unsure what to add # For all optional variables consult the Red Hat documentation: # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation # This section is for your platform gateway hosts # ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org gateway3.example.org # This section is for your Event-Driven Ansible controller hosts # ----------------------------------------------------- [automationedacontroller] eda1.example.org eda2.example.org eda3.example.org [redis] gateway1.example.org gateway2.example.org gateway3.example.org eda1.example.org eda2.example.org eda3.example.org [all:vars] # Common variables # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables # ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> # Platform gateway # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables # ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=<set your own> automationgateway_pg_database=<set your own> automationgateway_pg_username=<set your own> automationgateway_pg_password=<set your own> # Event-Driven Ansible controller # https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller # ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=<set your own> automationedacontroller_pg_database=<set your own> automationedacontroller_pg_username=<set your own> automationedacontroller_pg_password=<set your own> | [
"This is the Ansible Automation Platform installer inventory file intended for the RPM growth deployment topology. Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/rpm-topologies # Consult the docs if you are unsure what to add For all optional variables consult the Ansible Automation Platform documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] gateway.example.org This section is for your automation controller hosts ----------------------------------------------------- [automationcontroller] controller.example.org [automationcontroller:vars] peers=execution_nodes This section is for your Ansible Automation Platform execution hosts ----------------------------------------------------- [execution_nodes] exec.example.org This section is for your automation hub hosts ----------------------------------------------------- [automationhub] hub.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationedacontroller] eda.example.org This section is for the Ansible Automation Platform database ----------------------------------------------------- [database] db.example.org Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=db.example.org automationgateway_pg_password=<set your own> Automation controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-controller-variables ----------------------------------------------------- admin_password=<set your own> pg_host=db.example.org pg_password=<set your own> Automation hub https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-hub-variables ----------------------------------------------------- automationhub_admin_password=<set your own> automationhub_pg_host=db.example.org automationhub_pg_password=<set your own> Event-Driven Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=db.example.org automationedacontroller_pg_password=<set your own>",
"This is the Ansible Automation Platform installer inventory file intended for the mixed RPM growth deployment topology. Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration. https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/rpm-topologies # Consult the docs if you are unsure what to add For all optional variables consult the Red Hat documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] gateway.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationedacontroller] eda.example.org Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> redis_mode=standalone Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=db.example.org automationgateway_pg_password=<set your own> Event-Driven Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=db.example.org automationedacontroller_pg_password=<set your own>",
"This is the Ansible Automation Platform enterprise installer inventory file Consult the docs if you are unsure what to add For all optional variables consult the Red Hat documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org This section is for your automation controller hosts ----------------------------------------------------- [automationcontroller] controller1.example.org controller2.example.org [automationcontroller:vars] peers=execution_nodes This section is for your Ansible Automation Platform execution hosts ----------------------------------------------------- [execution_nodes] hop1.example.org node_type='hop' exec1.example.org exec2.example.org This section is for your automation hub hosts ----------------------------------------------------- [automationhub] hub1.example.org hub2.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationedacontroller] eda1.example.org eda2.example.org [redis] gateway1.example.org gateway2.example.org hub1.example.org hub2.example.org eda1.example.org eda2.example.org Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=<set your own> automationgateway_pg_database=<set your own> automationgateway_pg_username=<set your own> automationgateway_pg_password=<set your own> Automation controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-controller-variables ----------------------------------------------------- admin_password=<set your own> pg_host=<set your own> pg_database=<set your own> pg_username=<set your own> pg_password=<set your own> Automation hub https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-hub-variables ----------------------------------------------------- automationhub_admin_password=<set your own> automationhub_pg_host=<set your own> automationhub_pg_database=<set your own> automationhub_pg_username=<set your own> automationhub_pg_password=<set your own> Event-Driven Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=<set your own> automationedacontroller_pg_database=<set your own> automationedacontroller_pg_username=<set your own> automationedacontroller_pg_password=<set your own>",
"This is the Ansible Automation Platform mixed enterprise installer inventory file Consult the docs if you are unsure what to add For all optional variables consult the Red Hat documentation: https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation This section is for your platform gateway hosts ----------------------------------------------------- [automationgateway] gateway1.example.org gateway2.example.org gateway3.example.org This section is for your Event-Driven Ansible controller hosts ----------------------------------------------------- [automationedacontroller] eda1.example.org eda2.example.org eda3.example.org [redis] gateway1.example.org gateway2.example.org gateway3.example.org eda1.example.org eda2.example.org eda3.example.org Common variables https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-general-inventory-variables ----------------------------------------------------- registry_username=<your RHN username> registry_password=<your RHN password> Platform gateway https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#ref-gateway-variables ----------------------------------------------------- automationgateway_admin_password=<set your own> automationgateway_pg_host=<set your own> automationgateway_pg_database=<set your own> automationgateway_pg_username=<set your own> automationgateway_pg_password=<set your own> Event-Driven Ansible controller https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/appendix-inventory-files-vars#event-driven-ansible-controller ----------------------------------------------------- automationedacontroller_admin_password=<set your own> automationedacontroller_pg_host=<set your own> automationedacontroller_pg_database=<set your own> automationedacontroller_pg_username=<set your own> automationedacontroller_pg_password=<set your own>"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/rpm-topologies |
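The network port tables above define which services must be reachable between the Ansible Automation Platform hosts. As a minimal illustration only, and assuming firewalld with its default zone on RHEL, the following commands show how the ports listed for a database host and for a platform gateway host could be opened; which ports you open on a given VM depends entirely on that VM's role in the chosen topology.
# On the database VM: PostgreSQL traffic from the other components (5432/tcp).
sudo firewall-cmd --permanent --add-port=5432/tcp
# On a platform gateway VM: HTTP/HTTPS, the gateway port, and Redis (80, 443, 8443, 6379/tcp).
sudo firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp --add-port=8443/tcp --add-port=6379/tcp
# Apply the permanent rules.
sudo firewall-cmd --reload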
Chapter 15. TigerVNC | Chapter 15. TigerVNC TigerVNC (Tiger Virtual Network Computing) is a system for graphical desktop sharing which allows you to remotely control other computers. TigerVNC works on the client-server principle: a server shares its output ( vncserver ) and a client ( vncviewer ) connects to the server. 15.1. VNC Server vncserver is a utility which starts a VNC (Virtual Network Computing) desktop. It runs Xvnc with appropriate options and starts a window manager on the VNC desktop. vncserver allows users to run separate sessions in parallel on a machine which can then be accessed by any number of clients from anywhere. 15.1.1. Installing VNC Server To install the TigerVNC server, run the following command as root : 15.1.2. Configuring VNC Server The VNC server can be configured to start a display for one or more users, provided that accounts for the users exist on the system, with optional parameters such as for display settings, network address and port, and security settings. Procedure 15.1. Configuring a VNC Display for a Single User Specify the user name and the display number by editing /etc/sysconfig/vncservers and adding a line in the following format: VNCSERVERS=" display_number : user " The VNC user names must correspond to users of the system. Example 15.1. Setting the Display Number for a User For example, to configure display number 3 for user joe , open the configuration file for editing: Add a line as follows: Save and close the file. In the example above, display number 3 and the user joe are set. Do not use 0 as the display number since the main X display of a workstation is usually indicated as 0. Procedure 15.2. Configuring a VNC Display for Multiple Users To set a VNC display for more than one user, specify the user names and display numbers by editing /etc/sysconfig/vncservers and adding a line in the following format: VNCSERVERS=" display_number : user display_number : user " The VNC user names must correspond to users of the system. Example 15.2. Setting the Display Numbers for Two Users For example, to configure two users, open the configuration file for editing: Add a line as follows: Procedure 15.3. Configuring VNC Display Arguments Specify additional settings in the /etc/sysconfig/vncservers file by adding arguments using the VNCSERVERARGS directive as follows: Table 15.1. Frequently Used VNC Server Parameters VNCSERVERARGS Definition -geometry specifies the size of the VNC desktop to be created, default is 1024x768. -nolisten tcp prevents connections to your VNC server through TCP (Transmission Control Protocol) -localhost prevents remote VNC clients from connecting except when doing so through a secure tunnel See the Xvnc(1) man page for further options. Example 15.3. Setting vncserver Arguments Following on from the example above, to add arguments for two users, edit the /etc/sysconfig/vncservers file as follows: Procedure 15.4. Configuring VNC User Passwords To set the VNC password for all users defined in the /etc/sysconfig/vncservers file, enter the following command as root : To set the VNC password individually for a user: Important The stored password is not encrypted; anyone who has access to the password file can find the plain-text password. 15.1.3. Starting VNC Server In order to start a VNC desktop, the vncserver utility is used. It is a Perl script which simplifies the process of starting an Xvnc server. It runs Xvnc with appropriate options and starts a window manager on the VNC desktop. 
There are three ways to start vncserver : You can allow vncserver to choose the first available display number, start Xvnc with that display number, and start the default window manager in the Xvnc session. All these steps are provided by one command: You will be prompted to enter a VNC password the first time the command is run if no VNC password has been set. Alternately, you can specify a specific display number: vncserver : display_number vncserver attempts to start Xvnc with that display number and exits if the display number is not available. For example: Alternately, to start VNC server with displays for the users configured in the /etc/sysconfig/vncservers configuration file, as root enter: You can enable the vncserver service automatically at system start. Every time you log in, vncserver is automatically started. As root , run 15.1.4. Terminating a VNC Session Similarly to enabling the vncserver service, you can disable the automatic start of the service at system start: Or, when your system is running, you can stop the service by issuing the following command as root : To terminate a specific display, terminate vncserver using the -kill option along with the display number. Example 15.4. Terminating a Specific Display For example, to terminate display number 2, run: Example 15.5. Terminating an Xvnc process If it is not possible to terminate the VNC service or display, terminate the Xvnc session using the process ID (PID). To view the processes, enter: To terminate process 4290 , enter as root : | [
"~]# yum install tigervnc-server",
"~]# vi /etc/sysconfig/vncservers",
"VNCSERVERS=\"3:joe\"",
"~]# vi /etc/sysconfig/vncservers",
"VNCSERVERS=\"3:joe 4:jill\"",
"VNCSERVERS=\" display_number : user display_number : user \" VNCSERVERARGS[ display_number ]=\" arguments \"",
"VNCSERVERS=\"3:joe 4:jill\" VNCSERVERARGS[1]=\"-geometry 800x600 -nolisten tcp -localhost\" VNCSERVERARGS[2]=\"-geometry 1920x1080 -nolisten tcp -localhost\"",
"~]# vncpasswd Password: Verify:",
"~]# su - user ~]USD vncpasswd Password: Verify:",
"~]USD vncserver",
"~]USD vncserver :20",
"~]# service vncserver start",
"~]# chkconfig vncserver on",
"~]# chkconfig vncserver off",
"~]# service vncserver stop",
"~]# vncserver -kill :2",
"~]USD service vncserver status Xvnc (pid 4290 4189) is running",
"~]# kill -s 15 4290"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/chap-tigervnc |
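Because the example VNCSERVERARGS above include -localhost, the VNC displays only accept connections that arrive through a secure tunnel. The following is a minimal sketch of connecting from a remote client over SSH; the host name vncserver.example.com is an assumption, while user joe and display 3 (TCP port 5903, that is, 5900 plus the display number) follow the example configuration above.
# On the client: forward local port 5903 to display 3 on the VNC server.
ssh -L 5903:localhost:5903 joe@vncserver.example.com
# In a second terminal on the client: connect to the forwarded display through the tunnel.
vncviewer localhost:3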
Chapter 5. OpenShift Data Foundation deployed using local storage devices | Chapter 5. OpenShift Data Foundation deployed using local storage devices 5.1. Replacing operational or failed storage devices on clusters backed by local storage devices You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on the following infrastructures: Bare metal VMware Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Remove the underlying storage device from relevant worker node. Verify that relevant OSD Pod has moved to CrashLoopBackOff state. Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container platform node on which the OSD is scheduled. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Delete any old ocs-osd-removal jobs. Example output: Note The above command must reach Completed state before moving to the steps. This can take more than ten minutes. Navigate to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs that are removed from the respective OpenShift Data Foundation nodes. Get the Persistent Volume Claim (PVC) names of the replaced OSDs from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. 
<ocs-deviceset-name> Is the name of the relevant device based on the PVC names identified in the step. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the persistent volume (PV) that need to be deleted. Example output: Delete the PV. Physically add a new device to the node. Track the provisioning of PVs for the devices that match the deviceInclusionSpec . It can take a few minutes to provision the PVs. Example output: Once the PV is provisioned, a new OSD pod is automatically created for the PV. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Important If the new OSD does not show as Running after a few minutes, restart the rook-ceph-operator pod to force a reconciliation. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSDs are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the OSD status on the storage dashboard. Note A full data recovery may take longer depending on the volume of data being recovered. 5.2. Replacing operational or failed storage devices on IBM Power You can replace an object storage device (OSD) in OpenShift Data Foundation deployed using local storage devices on IBM Power. Note There might be a need to replace one or more underlying storage devices. Prerequisites Red Hat recommends that replacement devices are configured with similar infrastructure and resources to the device being replaced. Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-86bf8cdc8-4nb5t needs to be replaced and worker-0 is the RHOCP node on which the OSD is scheduled. Note The status of the pod is Running if the OSD you want to replace is healthy. Scale down the OSD deployment for the OSD to be replaced. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Important If the rook-ceph-osd pod is in terminating state for more than a few minutes, use the force option to delete the pod. Example output: Remove the old OSD from the cluster so that you can add a new OSD. Identify the DeviceSet associated with the OSD to be replaced. 
Example output: In this example, the Persistent Volume Claim (PVC) name is ocs-deviceset-localblock-0-data-0-64xjl . Identify the Persistent Volume (PV) associated with the PVC. where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the associated PV is local-pv-8137c873 . Identify the name of the device to be replaced. where, pv-suffix is the value in the PV name identified in an earlier step. Example output: In this example, the device name is vdc . Identify the prepare-pod associated with the OSD to be replaced. where, x , y , and pvc-suffix are the values in the DeviceSet identified in an earlier step. Example output: In this example, the prepare-pod name is rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc . Delete any old ocs-osd-removal jobs. Example output: Note The above command must reach Completed state before moving to the steps. This can take more than ten minutes. Change to the openshift-storage project. Remove the old OSD from the cluster. The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove dm-crypt managed device-mapper mapping from the OSDs that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of ocs-osd-removal-job pod. Example output: For each of the previously identified nodes, do the following: Create a debug pod and chroot to the host on the storage node. <node name> Is the name of the node. Find the relevant device name based on the PVC names identified in the step. <pvc name> Is the name of the PVC. Example output: Remove the mapped device. Important If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. <PID> Is the process ID. Verify that the device name is removed. Find the PV that need to be deleted. Example output: Delete the PV. <pv-name> Is the name of the PV. Replace the old device and use the new device to create a new OpenShift Container Platform PV. Log in to the OpenShift Container Platform node with the device to be replaced. In this example, the OpenShift Container Platform node is worker-0 . Example output: Record the /dev/disk that is to be replaced using the device name, vdc , identified earlier. Example output: Find the name of the LocalVolume CR, and remove or comment out the device /dev/disk that is to be replaced. Example output: Example output: Make sure to save the changes after editing the CR. Log in to the OpenShift Container Platform node with the device to be replaced and remove the old symlink . Example output: Identify the old symlink for the device name to be replaced. In this example, the device name is vdc . 
Example output: Remove the symlink . Verify that the symlink is removed. Example output: Replace the old device with the new device. Log back into the correct OpenShift Container Platform node and identify the device name for the new drive. The device name must change unless you are resetting the same device. Example output: In this example, the new device name is vdd . After the new /dev/disk is available, you can add a new disk entry to the LocalVolume CR. Edit the LocalVolume CR and add the new /dev/disk . In this example, the new device is /dev/vdd . Example output: Make sure to save the changes after editing the CR. Verify that there is a new PV in Available state and of the correct size. Example output: Create a new OSD for the new device. Deploy the new OSD. You need to restart the rook-ceph-operator to force operator reconciliation. Identify the name of the rook-ceph-operator . Example output: Delete the rook-ceph-operator . Example output: In this example, the rook-ceph-operator pod name is rook-ceph-operator-85f6494db4-sg62v . Verify that the rook-ceph-operator pod is restarted. Example output: Creation of the new OSD may take several minutes after the operator restarts. Delete the ocs-osd-removal job(s). Example output: Note When using an external key management system (KMS) with data encryption, the old OSD encryption key can be removed from the Vault server as it is now an orphan key. Verification steps Verify that there is a new OSD running. Example output: Verify that a new PVC is created. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the previously identified nodes, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset name(s). Log in to OpenShift Web Console and check the status card in the OpenShift Data Foundation dashboard under Storage section. Note A full data recovery may take longer depending on the volume of data being recovered. 5.3. Replacing operational or failed storage devices on IBM Z or IBM LinuxONE infrastructure You can replace operational or failed storage devices on IBM Z or IBM(R) LinuxONE infrastructure with new Small Computer System Interface (SCSI) disks. IBM Z or IBM(R) LinuxONE supports SCSI FCP disk logical units (SCSI disks) as persistent storage devices from external disk storage. You can identify a SCSI disk using its FCP Device number, two target worldwide port names (WWPN1 and WWPN2), and the logical unit number (LUN). For more information, see https://www.ibm.com/support/knowledgecenter/SSB27U_6.4.0/com.ibm.zvm.v640.hcpa5/scsiover.html Prerequisites Ensure that the data is resilient. In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab, and then click ocs-storagecluster-storagesystem . In the Status card of Block and File dashboard, under the Overview tab, verify that Data Resiliency has a green tick mark. Procedure List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. If one storage device fails, you can replace it with a new disk. Remove the disk.
Run the following command on the disk, replacing scsi-id with the SCSI disk identifier of the disk to be replaced: For example, the following command removes one disk with the device ID 0.0.8204 , the WWPN 0x500507630a0b50a4 , and the LUN 0x4002403000000000 : Append a new SCSI disk. Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Example output: | [
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>",
"osd_id_to_remove=0",
"oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found in openshift-storage namespace.",
"oc delete -n openshift-storage pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --grace-period=0 --force",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/ <node name>",
"chroot /host",
"dmsetup ls| grep <pvc name>",
"ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose <ocs-deviceset-name>",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc get pv -L kubernetes.io/hostname | grep <storageclass-name> | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <pv_name>",
"oc -n openshift-local-storage describe localvolumeset <lvs-name>",
"[...] Status: Conditions: Last Transition Time: 2020-11-17T05:03:32Z Message: DiskMaker: Available, LocalProvisioner: Available Status: True Type: DaemonSetsAvailable Last Transition Time: 2020-11-17T05:03:34Z Message: Operator reconciled successfully. Status: True Type: Available Observed Generation: 1 Total Provisioned Device Count: 4 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Discovered 2m30s (x4 localvolumeset- node.example.com - NewDevice over 2m30s) symlink-controller found possible matching disk, waiting 1m to claim Normal FoundMatch 89s (x4 localvolumeset- node.example.com - ingDisk over 89s) symlink-controller symlinking matching disk",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h",
"oc delete pod -n openshift-storage -l app=rook-ceph-operator",
"pod \"rook-ceph-operator-6f74fb5bff-2d982\" deleted",
"oc get -n openshift-storage pvc | grep <lvs-name>",
"ocs-deviceset-0-0-c2mqb Bound local-pv-b481410 1490Gi RWO localblock 5m ocs-deviceset-1-0-959rp Bound local-pv-414755e0 1490Gi RWO localblock 1d20h ocs-deviceset-2-0-79j94 Bound local-pv-3e8964d3 1490Gi RWO localblock 1d20h",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node name>",
"chroot /host",
"lsblk",
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-86bf8cdc8-4nb5t 0/1 crashLoopBackOff 0 24h 10.129.2.26 worker-0 <none> <none> rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 0 24h 10.128.2.46 worker-1 <none> <none> rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 24h 10.131.0.33 worker-2 <none> <none>",
"osd_id_to_remove=0",
"oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0",
"deployment.extensions/rook-ceph-osd-0 scaled",
"oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}",
"No resources found in openshift-storage namespace.",
"oc delete -n openshift-storage pod rook-ceph-osd-0-86bf8cdc8-4nb5t --grace-period=0 --force",
"warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-86bf8cdc8-4nb5t\" force deleted",
"oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{osd_id_to_remove} | grep ceph.rook.io/pvc",
"ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-64xjl",
"oc get -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ocs-deviceset-localblock-0-data-0-64xjl Bound local-pv-8137c873 256Gi RWO localblock 24h",
"oc get pv local-pv- <pv-suffix> -o yaml | grep path",
"path: /mnt/local-storage/localblock/vdc",
"oc describe -n openshift-storage pvc ocs-deviceset- <x> - <y> - <pvc-suffix> | grep Used",
"Used By: rook-ceph-osd-prepare-ocs-deviceset-localblock-0-data-0-64knzkc",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc project openshift-storage",
"oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -",
"oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'",
"2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1",
"oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'",
"2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"",
"oc debug node/ <node name>",
"chroot /host",
"dmsetup ls| grep <pvc name>",
"ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)",
"cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt",
"ps -ef | grep crypt",
"kill -9 <PID>",
"dmsetup ls",
"oc get pv -L kubernetes.io/hostname | grep localblock | grep Released",
"local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1",
"oc delete pv <pv-name>",
"oc debug node/worker-0",
"Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc",
"oc get -n openshift-local-storage localvolume",
"NAME AGE localblock 25h",
"oc edit -n openshift-local-storage localvolume localblock",
"[...] storageClassDevices: - devicePaths: # - /dev/vdc storageClassName: localblock volumeMode: Block [...]",
"oc debug node/worker-0",
"Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.88.21 If you don't see a command prompt, try pressing enter. chroot /host",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 17 Nov 18 15:23 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 .. lrwxrwxrwx. 1 root root 8 Nov 18 15:23 vdc -> /dev/vdc",
"rm /mnt/local-storage/localblock/vdc",
"ls -alh /mnt/local-storage/localblock",
"total 0 drwxr-xr-x. 2 root root 6 Nov 18 17:11 . drwxr-xr-x. 3 root root 24 Nov 18 15:23 ..",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 252:0 0 40G 0 disk |-vda1 252:1 0 4M 0 part |-vda2 252:2 0 384M 0 part /boot `-vda4 252:4 0 39.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 39.6G 0 dm /sysroot vdb 252:16 0 512B 1 disk vdd 252:32 0 256G 0 disk",
"oc edit -n openshift-local-storage localvolume localblock",
"[...] storageClassDevices: - devicePaths: # - /dev/vdc - /dev/vdd storageClassName: localblock volumeMode: Block [...]",
"oc get pv | grep 256Gi",
"local-pv-1e31f771 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-2-data-0-6xhkf localblock 24h local-pv-ec7f2b80 256Gi RWO Delete Bound openshift-storage/ocs-deviceset-localblock-1-data-0-hr2fx localblock 24h local-pv-8137c873 256Gi RWO Delete Available localblock 32m",
"oc get -n openshift-storage pod -l app=rook-ceph-operator",
"NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-sg62v 1/1 Running 0 1d20h",
"oc delete -n openshift-storage pod rook-ceph-operator-85f6494db4-sg62v",
"pod \"rook-ceph-operator-85f6494db4-sg62v\" deleted",
"oc get -n openshift-storage pod -l app=rook-ceph-operator",
"NAME READY STATUS RESTARTS AGE rook-ceph-operator-85f6494db4-wx9xx 1/1 Running 0 50s",
"oc delete -n openshift-storage job ocs-osd-removal-job",
"job.batch \"ocs-osd-removal-job\" deleted",
"oc get -n openshift-storage pods -l app=rook-ceph-osd",
"rook-ceph-osd-0-76d8fb97f9-mn8qz 1/1 Running 0 23m rook-ceph-osd-1-7c99657cfb-jdzvz 1/1 Running 1 25h rook-ceph-osd-2-5f9f6dfb5b-2mnw9 1/1 Running 0 25h",
"oc get -n openshift-storage pvc | grep localblock",
"ocs-deviceset-localblock-0-data-0-q4q6b Bound local-pv-8137c873 256Gi RWO localblock 10m ocs-deviceset-localblock-1-data-0-hr2fx Bound local-pv-ec7f2b80 256Gi RWO localblock 1d20h ocs-deviceset-localblock-2-data-0-6xhkf Bound local-pv-1e31f771 256Gi RWO localblock 1d20h",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node name>",
"chroot /host",
"lsblk",
"lszdev",
"TYPE ID zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no",
"chzdev -d scsi-id",
"chzdev -d 0.0.8204:0x500407630c0b50a4:0x3002b03000000000",
"chzdev -e 0.0.8204:0x500507630b1b50a4:0x4001302a00000000",
"lszdev zfcp-lun",
"TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/openshift_data_foundation_deployed_using_local_storage_devices |
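The individual removal commands listed above follow a fixed sequence: scale the OSD deployment down, run the ocs-osd-removal template, wait for the job, and check its log. The following is a minimal shell sketch of that sequence, not taken from the Red Hat documentation itself; the OSD ID 0, the 10-minute timeout, and the use of oc wait and --ignore-not-found are assumptions layered on top of the commands shown in the procedure.

#!/bin/bash
# Sketch: drive the "remove the old OSD" phase of the device-replacement procedure.
# Assumption: osd_id_to_remove and the openshift-storage namespace match your cluster.
set -euo pipefail
osd_id_to_remove=0            # integer after the rook-ceph-osd- prefix in the pod name
ns=openshift-storage

# Scale the OSD deployment to zero so the failed OSD pod terminates.
oc scale -n "$ns" deployment "rook-ceph-osd-${osd_id_to_remove}" --replicas=0

# Clear any previous removal job, then instantiate the removal template.
oc delete -n "$ns" job ocs-osd-removal-job --ignore-not-found
oc process -n "$ns" ocs-osd-removal \
  -p FAILED_OSD_IDS="${osd_id_to_remove}" FORCE_OSD_REMOVAL=false | oc create -n "$ns" -f -

# Wait for the job to finish and confirm the removal in its log.
oc wait -n "$ns" --for=condition=complete job/ocs-osd-removal-job --timeout=10m
oc logs -n "$ns" -l job-name=ocs-osd-removal-job --tail=-1 | grep -i 'completed removal'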
9.4. NUMA-Aware Kernel SamePage Merging (KSM) | 9.4. NUMA-Aware Kernel SamePage Merging (KSM) Kernel SamePage Merging (KSM) allows virtual machines to share identical memory pages. KSM can detect that a system is using NUMA memory and control merging pages across different NUMA nodes. Use the sysfs /sys/kernel/mm/ksm/merge_across_nodes parameter to control merging of pages across different NUMA nodes. By default, pages from all nodes can be merged together. When this parameter is set to zero, only pages from the same node are merged. Generally, unless you are oversubscribing the system memory, you will get better runtime performance by disabling KSM sharing. Important When KSM merges across nodes on a NUMA host with multiple guest virtual machines, guests and CPUs from more distant nodes can suffer a significant increase of access latency to the merged KSM page. To instruct the hypervisor to disable share pages for a guest, add the following to the guest's XML: For more information about tuning memory settings with the <memoryBacking> element, see Section 8.2.2, "Memory Tuning with virsh" . | [
"<memoryBacking> <nosharepages/> </memoryBacking>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_ksm |
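The section above names the merge_across_nodes tunable, but its command listing only shows the libvirt <nosharepages/> element. A hedged shell sketch for the host-side setting follows; the need to stop KSM and unmerge existing pages (run value 2) before changing the tunable, and the ksm/ksmtuned service names, reflect general RHEL 7 KSM behaviour rather than details stated in this section.

# Sketch: restrict KSM merging to pages within the same NUMA node (run as root).
systemctl stop ksm ksmtuned                      # stop the KSM services first
echo 2 > /sys/kernel/mm/ksm/run                  # unmerge all currently shared pages
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes   # 0 = merge only within a node
echo 1 > /sys/kernel/mm/ksm/run                  # re-enable KSM only if memory is oversubscribed
cat /sys/kernel/mm/ksm/merge_across_nodes        # verify the new value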
Chapter 2. Multipath Devices | Chapter 2. Multipath Devices Without DM Multipath, each path from a server node to a storage controller is treated by the system as a separate device, even when the I/O path connects the same server node to the same storage controller. DM Multipath provides a way of organizing the I/O paths logically, by creating a single multipath device on top of the underlying devices. 2.1. Multipath Device Identifiers Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally unique and unchanging. By default, the name of a multipath device is set to its WWID. Alternately, you can set the user_friendly_names option in the multipath configuration file, which sets the alias to a node-unique name of the form mpath n . For example, a node with two HBAs attached to a storage controller with two ports by means of a single unzoned FC switch sees four devices: /dev/sda , /dev/sdb , /dev/sdc , and /dev/sdd . DM Multipath creates a single device with a unique WWID that reroutes I/O to those four underlying devices according to the multipath configuration. When the user_friendly_names configuration option is set to yes , the name of the multipath device is set to mpath n . When new devices are brought under the control of DM Multipath, the new devices may be seen in two different places under the /dev directory: /dev/mapper/mpath n and /dev/dm- n . The devices in /dev/mapper are created early in the boot process. Use these devices to access the multipathed devices, for example when creating logical volumes. Any devices of the form /dev/dm- n are for internal use only and should never be used by the administrator directly. For information on the multipath configuration defaults, including the user_friendly_names configuration option, see Section 4.3, "Configuration File Defaults" . You can also set the name of a multipath device to a name of your choosing by using the alias option in the multipaths section of the multipath configuration file. For information on the multipaths section of the multipath configuration file, see Section 4.4, "Multipaths Device Configuration Attributes" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/mpath_devices
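The chapter above describes the user_friendly_names and alias options but carries no command listing. The sketch below shows one way to apply both settings; the WWID 3600508b4000156d700012000000b0000 and the alias mpath_oradata are illustrative placeholders, and appending to /etc/multipath.conf assumes the file does not already define these sections.

# Sketch: enable user-friendly names and give one multipath device a custom alias.
cat >> /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  3600508b4000156d700012000000b0000
        alias mpath_oradata
    }
}
EOF
systemctl reload multipathd    # re-read the configuration without dropping paths
multipath -ll                  # confirm the device is now listed under its alias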
Chapter 9. OpenID Connect in JBoss EAP | Chapter 9. OpenID Connect in JBoss EAP Use the JBoss EAP native OpenID Connect (OIDC) client to secure your applications through an external OpenID provider. OIDC is an identity layer that enables clients, such as JBoss EAP, to verify a user's identity based on OpenID provider authentication. For example, you can secure your JBoss EAP applications using Red Hat Single Sign-On as the OpenID provider. 9.1. OpenID Connect configuration in JBoss EAP When you secure your applications using an OpenID provider, you do not need to configure any security domain resources locally. The elytron-oidc-client subsystem provides a native OpenID Connect (OIDC) client in JBoss EAP to connect with OpenID providers. JBoss EAP automatically creates a virtual security domain for your application, based on your OpenID provider configurations. Important It is recommended to use the OIDC client with Red Hat Single Sign-On. You can use other OpenID providers if they can be configured to use access tokens that are JSON Web Tokens (JWTs) and can be configured to use the RS256, RS384, RS512, ES256, ES384, or ES512 signature algorithm. To enable the use of OIDC, you can configure either the elytron-oidc-client subsystem or an application itself. JBoss EAP activates the OIDC authentication as follows: When you deploy an application to JBoss EAP, the elytron-oidc-client subsystem scans the deployment to detect if the OIDC authentication mechanism is required. If the subsystem detects OIDC configuration for the deployment in either the elytron-oidc-client subsystem or the application deployment descriptor, JBoss EAP enables the OIDC authentication mechanism for the application. If the subsystem detects OIDC configuration in both places, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor. Note The keycloak-client-oidc layer to secure your applications with Red Hat Single Sign-On is deprecated in JBoss EAP XP 4.0.0. Use the native OIDC client provided by the elytron-oidc-client subsystem instead. Deployment configuration To secure an application with OIDC by using a deployment descriptor, update the application's deployment configuration as follows: Create a file called oidc.json in the WEB-INF directory with the OIDC configuration information. Example oidc.json contents { "client-id" : "customer-portal", 1 "provider-url" : "http://localhost:8180/auth/realms/demo", 2 "ssl-required" : "external", 3 "credentials" : { "secret" : "234234-234234-234234" 4 } } 1 The name to identify the OIDC client with the OpenID provider. 2 The OpenID provider URL. 3 Require HTTPS for external requests. 4 The client secret that was registered with the OpenID provider. Set the auth-method property to OIDC in the application deployment descriptor web.xml file. Example deployment descriptor update <login-config> <auth-method>OIDC</auth-method> </login-config> Subsystem configuration You can secure applications with OIDC by configuring the elytron-oidc-client subsystem in the following ways: Create a single configuration for multiple deployments if you use the same OpenID provider for each application. Create a different configuration for each deployment if you use different OpenID providers for different applications. 
Example XML configuration for a single deployment: <subsystem xmlns="urn:wildfly:elytron-oidc-client:1.0"> <secure-deployment name="DEPLOYMENT_RUNTIME_NAME.war"> 1 <client-id>customer-portal</client-id> 2 <provider-url>http://localhost:8180/auth/realms/demo</provider-url> 3 <ssl-required>external</ssl-required> 4 <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> 5 </secure-deployment </subsystem> 1 The deployment runtime name. 2 The name to identify the OIDC client with the OpenID provider. 3 The OpenID provider URL. 4 Require HTTPS for external requests. 5 The client secret that was registered with the OpenID provider. To secure multiple applications using the same OpenID provider, configure the provider separately, as shown in the example: <subsystem xmlns="urn:wildfly:elytron-oidc-client:1.0"> <provider name=" USD{OpenID_provider_name} "> <provider-url>http://localhost:8080/auth/realms/demo</provider-url> <ssl-required>external</ssl-required> </provider> <secure-deployment name="customer-portal.war"> 1 <provider> USD{OpenID_provider_name} </provider> <client-id>customer-portal</client-id> <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> </secure-deployment> <secure-deployment name="product-portal.war"> 2 <provider> USD{OpenID_provider_name} </provider> <client-id>product-portal</client-id> <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> </secure-deployment> </subsystem> 1 A deployment: customer-portal.war 2 Another deployment: product-portal.war Additional resources OpenID Connect specification elytron-oidc-client subsystem attributes OpenID Connect Libraries Securing applications using OpenID Connect with Red Hat Single Sign-On MicroProfile JWT 9.2. Enabling the elytron-oidc-client subsystem The elytron-oidc-client subsystem is provided in the standalone-microprofile.xml configuration file. To use it, you must start your server with the bin/standalone.sh -c standalone-microprofile.xml command. You can include the elytron-oidc-client subsystem in the standalone.xml configuration by enabling it using the management CLI. Prerequisites You have installed JBoss EAP XP. Procedure Add the elytron-oidc-client extension using the management CLI. Enable the elytron-oidc-client subsystem using the management CLI. Reload JBoss EAP. You can now use the elytron-oidc-client subsystem by starting the server normally, with the command bin/standalone.sh Additional resources elytron-oidc-client subsystem attributes 9.3. Securing applications using OpenID Connect with Red Hat Single Sign-On You can use OpenID Connect (OIDC) to delegate authentication to an external OpenID provider. The elytron-oidc-client subsystem provides a native OIDC client in JBoss EAP to connect with external OpenID providers. To create an application secured with OpenID Connect using Red Hat Single Sign-On, follow these procedures: Configure Red Hat Single Sign-On as the OpenID provider Create a Maven project for your application Create an application that uses OpenID Connect Restrict access to your application based on user roles Create and assign user roles in Red Hat Single Sign-On 9.3.1. Configuring Red Hat Single Sign-On as an OpenID provider Red Hat Single Sign-On is an identity and access management provider for securing web applications with single sign-on (SSO). It supports OpenID Connect (an extension to OAuth 2.0). Prerequisites You have installed the Red Hat Single Sign-On server. 
For more information, see Installing the Red Hat Single Sign-On server in the Red Hat Single Sign-On Getting Started Guide . You have created a user in your Red Hat Single Sign-On server instance. For more information, see Creating a user in the Red Hat Single Sign-On Getting Started Guide . Procedure Start the Red Hat Single Sign-On server at a port other than 8080 because JBoss EAP default port is 8080. Syntax Example Log in to the Admin Console at http://localhost:<port>/auth/ . For example, http://localhost:8180/auth/ . To create a realm, in the Admin Console, hover over Master , and click Add realm . Enter a name for the realm. For example, example_realm . Ensure that Enabled is ON and click Create . Click Users , then click Add user to add a user to the realm. Enter a user name. For example, jane_doe . Ensure that User Enabled is ON and click Save . Click Credentials to add a password to the user. Set a password for the user. For example, janedoep@USDUSD . Toggle Temporary to OFF and click Set Password . In the confirmation prompt, click Set password . Click Clients , then click Create to configure a client connection. Enter a client ID. For example, my_jbeap . Ensure that Client Protocol is set to openid-connect , and click Save . Click Installation , then select Keycloak OIDC JSON as the Format Option to see the connection parameters. { "realm": "example_realm", "auth-server-url": "http://localhost:8180/auth/", "ssl-required": "external", "resource": "my_jbeap", "public-client": true, "confidential-port": 0 } When configuring your JBoss EAP application to use Red Hat Single Sign-On as the identity provider, you use the parameters as follows: Click Clients , click Edit to my_jbeap to edit the client settings. In Valid Redirect URIs , enter the URL where the page should redirect after authentication is successful. For this example, set this value to http://localhost:8080/simple-oidc-example/secured/* Additional resources Configuring a Maven project for creating a secure application Creating a realm and a user 9.3.2. Configuring a Maven project for creating a secure application Create a Maven project with the required dependencies and the directory structure for creating a secure application. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . You have configured your Maven repository for the latest release. For more information, see Maven and the JBoss EAP microprofile maven repository . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. 
Syntax Example Navigate to the application root directory: Syntax Example Update the generated pom.xml file as follows: Set the following properties: <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <failOnMissingWebXml>false</failOnMissingWebXml> <version.server.bom>4.0.0.GA</version.server.bom> <version.server.bootable-jar>4.0.0.GA</version.server.bootable-jar> <version.wildfly-jar.maven.plugin>4.0.0.GA</version.wildfly-jar.maven.plugin> </properties> Set the following dependencies: <dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>3.1.0.redhat-1</version> <scope>provided</scope> </dependency> </dependencies> Set the following build configuration to use mvn widlfy:deploy to deploy the application: <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>2.1.0.Final</version> </plugin> </plugins> </build> Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create your secure application. Additional resources Creating a secure application that uses OpenID Connect 9.3.3. Creating a secure application that uses OpenID Connect You can secure an application by either updating its deployment configuration or by configuring the elytron-oidc-client subsystem. The following example demonstrates creating a servlet that prints a logged-in user's Principal. For an existing application, only those steps that are related to updating the deployment configuration or the elytron-oidc-client subsystem are required. In this example, the value of the Principal comes from the ID token from the OpenID provider. By default, the Principal is the value of the "sub" claim from the token. You can specify which claim value from the ID token to use as the Principal in one of the following: The elytron-oidc-client subsystem attribute principal-attribute . The oidc.json file. <application_root> in the procedure denotes the pom.xml file directory. The pom.xml file contains your application's Maven configuration. Prerequisites You have created a Maven project. For more information, see Configuring Maven project for creating a secure application . You have configured Red Hat Single Sign-On as the OpenID provider. For more information, see Configuring Red Hat Single Sign-On as an OpenID provider . You have enabled the elytron-oidc-client subsystem. For more information, see Enabling the elytron-oidc-client subsystem Procedure Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a servlet "SecuredServlet.java" with the following content: package com.example.oidc; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. 
* */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } Add security rules for access to your application in the deployment descriptor web.xml file located in the WEB-INF directory of the application. <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app> To secure the application with OpenID Connect, either update the deployment configuration or configure the elytron-oidc-client subsystem. Note If you configure OpenID Connect in both the deployment configuration and the elytron-oidc-client subsystem, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor. Updating the deployment configuration: Create a file oidc.json in the WEB-INF directory, like this: { "provider-url" : "http://localhost:8180/auth/realms/example_realm", "ssl-required": "external", "client-id": "my_jbeap", "public-client": true, "confidential-port": 0 } Update the deployment descriptor web.xml file with the following text to declare that this application uses OIDC: <login-config> <auth-method>OIDC</auth-method> </login-config> Configuring the elytron-oidc-client subsystem: To secure your application, use the following management CLI command: In the application root directory, compile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-oidc-example/secured . Log in with your credentials. For example: You get the following output: You can now log in to the application using the credentials you configured in the Red Hat Single Sign-On as the OpenID provider. Additional resources OpenID Connect configuration in JBoss EAP Restricting access to applications based on user roles 9.3.4. Restricting access to applications based on user roles You can restrict access to all, or parts, of your application based on user roles. For example, you can let users with the "public" role have access to the parts of your application that aren't sensitive, and give users with the "admin" role access to those parts that are. Prerequisites You have secured your application using OpenID Connect. For more information, see Creating a secure application that uses OpenID Connect . 
Procedure Update the deployment descriptor web.xml file with the following text: Syntax Example 1 Allow only those users with the role example_role to access your application. In the application root directory, recompile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-oidc-example/secured . Log in with your credentials. For example: You get the following output: Because you have not assigned the required role to the user "jane_doe," jane_doe can't log in to your application. Only the users with the required role can log in. To assign users the required roles, see Creating and assigning roles to users in Red Hat Single Sign-On . 9.3.5. Creating and assigning user roles in Red Hat Single Sign-On Red Hat Single Sign-On is an identity and access management provider for securing your web applications with single sign-on (SSO). You can define users and assign roles in Red Hat Single Sign-On. Prerequisites You have configured Red Hat Single Sign-On. For more information, see Configuring Red Hat Single Sign-On as an OpenID provider . Procedure Log in to the admin console at http://localhost:<port>/auth/ . For example, http://localhost:8180/auth/ . Click the realm you use to connect with JBoss EAP. For example, example_realm . Click Clients , then click the client-name you configured for JBoss EAP. For example, my_jbeap . Click Roles , then Add Role . Enter a role name, such as example_role , then click Save . This is the role name you configure in JBoss EAP for authorization. Click Users , then View all users . Click an ID to assign the role you created. For example, click the ID for jane_doe . Click Role Mappings . In the Client Roles field, select the client-name you configured for JBoss EAP. For example, my_jbeap . In Available Roles , select a role to assign. For example, example_role . Click Add selected . Verification In a browser, navigate to the application URL. Log in with your credentials. For example: You get the following output: Users with the required role can log in to your application. Additional resources Assigning permissions and access using roles and groups in Red Hat Single Sign-On 9.4. Developing JBoss EAP bootable jar application with OpenID Connect You can use OpenID Connect (OIDC) to delegate authentication to an external OpenID provider. The elytron-oidc-client galleon layer provides a native OIDC client in JBoss EAP bootable jar applications to connect with external OpenID providers. To create an application secured with OpenID Connect using Red Hat Single Sign-On, follow these procedures: Configure Red Hat Single Sign-On as the OpenID provider Create a Maven project for your application Create a bootable jar application that uses OpenID Connect Restrict access to your application based on user roles Create and assign user roles in Red Hat Single Sign-On 9.4.1. Configuring Red Hat Single Sign-On as an OpenID provider Red Hat Single Sign-On is an identity and access management provider for securing web applications with single sign-on (SSO). It supports OpenID Connect (an extension to OAuth 2.0). Prerequisites You have installed the Red Hat Single Sign-On server. For more information, see Installing the Red Hat Single Sign-On server in the Red Hat Single Sign-On Getting Started Guide . You have created a user in your Red Hat Single Sign-On server instance. For more information, see Creating a user in the Red Hat Single Sign-On Getting Started Guide . 
Procedure Start the Red Hat Single Sign-On server at a port other than 8080 because JBoss EAP default port is 8080. Syntax Example Log in to the Admin Console at http://localhost:<port>/auth/ . For example, http://localhost:8180/auth/ . To create a realm, in the Admin Console, hover over Master , and click Add realm . Enter a name for the realm. For example, example_realm . Ensure that Enabled is ON and click Create . Click Users , then click Add user to add a user to the realm. Enter a user name. For example, jane_doe . Ensure that User Enabled is ON and click Save . Click Credentials to add a password to the user. Set a password for the user. For example, janedoep@USDUSD . Toggle Temporary to OFF and click Set Password . In the confirmation prompt, click Set password . Click Clients , then click Create to configure a client connection. Enter a client ID. For example, my_jbeap . Ensure that Client Protocol is set to openid-connect , and click Save . Click Installation , then select Keycloak OIDC JSON as the Format Option to see the connection parameters. { "realm": "example_realm", "auth-server-url": "http://localhost:8180/auth/", "ssl-required": "external", "resource": "my_jbeap", "public-client": true, "confidential-port": 0 } When configuring your JBoss EAP application to use Red Hat Single Sign-On as the identity provider, you use the parameters as follows: Click Clients , click Edit to my_jbeap to edit the client settings. In Valid Redirect URIs , enter the URL where the page should redirect after authentication is successful. For this example, set this value to http://localhost:8080/simple-oidc-layer-example/secured/* Additional resources Configuring a Maven project for creating a secure application Creating a realm and a user 9.4.2. Configuring a Maven project for a bootable jar OIDC application Create a Maven project with the required dependencies and the directory structure for creating a bootable jar application that uses OpenID Connect. The elytron-oidc-client galleon layer provides a native OpenID Connect (OIDC) client to connect with OpenID providers. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . You have configured your Maven repository for the latest release. For more information, see Maven and the JBoss EAP microprofile Maven repository . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. Syntax Example Navigate to the application root directory. 
Syntax Example Update the generated pom.xml file as follows: Set the following repositories: <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> Set the following plugin repositories: <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> Set the following properties: <properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <bootable.jar.maven.plugin.version>6.1.2.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>4.0.0.GA-redhat-00002</jboss.xp.galleon.feature.pack.version> </properties> Set the following dependencies: <dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>3.1.0.redhat-1</version> <scope>provided</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.3.4.GA</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency> </dependencies> </dependencyManagement> Set the following build configuration in the <build> element of the pom.xml file: <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> 1 <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>elytron-oidc-client</layer> 2 </layers> <context-root>false</context-root> 3 </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> 1 JBoss EAP Maven plug-in to build the application as a bootable JAR 2 The elytron-oidc-client layer provides a native OpenID Connect (OIDC) client to connect with external OpenID providers. 3 Register the application in the simple-oidc-layer-example resource path. The servlet is then available at the URL http:// server-url / application_name / servlet_path , for example: http://localhost:8080/simple-oidc-layer-example/secured . By default, the application WAR file is registered under the root-context path, like http:// server-url / servlet_path , for example: http://localhost:8080/secured . Set the application name, for example "simple-oidc-layer-example" in the <build> element of the pom.xml file. Verification In the application root directory, enter the following command: You get an output similar to the following: You can now create a bootable jar application that uses OpenID Connect 9.4.3. Creating a bootable jar application that uses OpenID Connect The following example demonstrates creating a servlet that prints a logged-in user's Principal. For an existing application, only those steps that are related to updating the deployment configuration are required. In this example, the value of the Principal comes from the ID token from the OpenID provider. By default, the Principal is the value of the "sub" claim from the token. 
You can specify which claim value from the ID token to use as the Principal in one of the following: The elytron-oidc-client subsystem attribute principal-attribute . The oidc.json file. <application_root> in the procedure denotes the pom.xml file directory. The pom.xml file contains your application's Maven configuration. Prerequisites You have created a Maven project. For more information, see Configuring Maven project for creating a secure application . You have configured Red Hat Single Sign-On as the OpenID provider. For more information, see Configuring Red Hat Single Sign-On as an OpenID provider . Procedure Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a servlet "SecuredServlet.java" with the following content: package com.example.oidc; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. * */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } Add security rules for access to your application in the deployment descriptor web.xml file located in the WEB-INF directory of the application. <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app> To secure the application with OpenID Connect, either update the deployment configuration or configure the elytron-oidc-client subsystem. Note If you configure OpenID Connect in both the deployment configuration and the elytron-oidc-client subsystem, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor. 
Updating the deployment configuration: Create a file oidc.json in the WEB-INF directory, like this: { "provider-url" : "http://localhost:8180/auth/realms/example_realm", "ssl-required": "external", "client-id": "my_jbeap", "public-client": true, "confidential-port": 0 } Update the deployment descriptor web.xml file with the following text to declare that this application uses OIDC: <login-config> <auth-method>OIDC</auth-method> </login-config> Configuring the elytron-oidc-client subsystem: Create a directory to store a CLI script in the application root directory: Syntax Example You can create the directory at any place that Maven can access, inside the application root directory. Create a CLI script, such as configure-oidc.cli , with the following content: The subsystem command defines the simple-oidc-layer-example.war resource as the deployment to secure in elytron-oidc-client subsystem. In the project pom.xml file, add the following configuration extract to the existing plug-in <configuration> element: <cli-sessions> <cli-session> <script-files> <script>scripts/configure-oidc.cli</script> </script-files> </cli-session> </cli-sessions> In the application root directory, compile your application with the following command: Deploy the bootable jar application using the following command: Syntax Example This starts JBoss EAP and deploys the application. Verification In a browser, navigate to http://localhost:8080/simple-oidc-layer-example/secured . Log in with your credentials. For example: You get the following output: You can now log in to the application using the credentials you configured in the Red Hat Single Sign-On as the OpenID provider. Additional resources OpenID Connect configuration in JBoss EAP Restricting access to applications based on user roles 9.4.4. Restricting access based on user roles in bootable jar OIDC applications You can restrict access to all, or parts, of your application based on user roles. For example, you can let users with the "public" role have access to the parts of your application that aren't sensitive, and give users with the "admin" role access to those parts that are. Prerequisites You have secured your application using OpenID Connect. For more information, see Creating a bootable jar application that uses OpenID Connect . Procedure Update the deployment descriptor web.xml file with the following text: Syntax Example 1 Allow only those users with the role example_role to access your application. In the application root directory, recompile your application with the following command: Deploy the application. This starts JBoss EAP and deploys the application. Verification In a browser, navigate to \localhost:8080/simple-oidc-layer-example/secured . Log in with your credentials. For example: You get the following output: Because you have not assigned the required role to the user "jane_doe," jane_doe can't log in to your application. Only the users with the required role can log in. To assign users the required roles, see Creating and assigning roles to users in Red Hat Single Sign-On . 9.4.5. Creating and assigning user roles in Red Hat Single Sign-On Red Hat Single Sign-On is an identity and access management provider for securing your web applications with single sign-on (SSO). You can define users and assign roles in Red Hat Single Sign-On. Prerequisites You have configured Red Hat Single Sign-On. For more information, see Configuring Red Hat Single Sign-On as an OpenID provider . 
Procedure Log in to the admin console at http://localhost:<port>/auth/ . For example, http://localhost:8180/auth/ . Click the realm you use to connect with JBoss EAP. For example, example_realm . Click Clients , then click the client-name you configured for JBoss EAP. For example, my_jbeap . Click Roles , then Add Role . Enter a role name, such as example_role , then click Save . This is the role name you configure in JBoss EAP for authorization. Click Users , then View all users . Click an ID to assign the role you created. For example, click the ID for jane_doe . Click Role Mappings . In the Client Roles field, select the client-name you configured for JBoss EAP. For example, my_jbeap . In Available Roles , select a role to assign. For example, example_role . Click Add selected . Verification In a browser, navigate to the application URL. Log in with your credentials. For example: You get the following output: Users with the required role can log in to your application. Additional resources Assigning permissions and access using roles and groups in Red Hat Single Sign-On | [
"{ \"client-id\" : \"customer-portal\", 1 \"provider-url\" : \"http://localhost:8180/auth/realms/demo\", 2 \"ssl-required\" : \"external\", 3 \"credentials\" : { \"secret\" : \"234234-234234-234234\" 4 } }",
"<login-config> <auth-method>OIDC</auth-method> </login-config>",
"<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <secure-deployment name=\"DEPLOYMENT_RUNTIME_NAME.war\"> 1 <client-id>customer-portal</client-id> 2 <provider-url>http://localhost:8180/auth/realms/demo</provider-url> 3 <ssl-required>external</ssl-required> 4 <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> 5 </secure-deployment </subsystem>",
"<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <provider name=\" USD{OpenID_provider_name} \"> <provider-url>http://localhost:8080/auth/realms/demo</provider-url> <ssl-required>external</ssl-required> </provider> <secure-deployment name=\"customer-portal.war\"> 1 <provider> USD{OpenID_provider_name} </provider> <client-id>customer-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> <secure-deployment name=\"product-portal.war\"> 2 <provider> USD{OpenID_provider_name} </provider> <client-id>product-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> </subsystem>",
"/extension=org.wildfly.extension.elytron-oidc-client:add",
"/subsystem=elytron-oidc-client:add",
"reload",
"RH_SSO_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset= <offset-number>",
"/home/servers/rh-sso-7.4/bin/standalone.sh -Djboss.socket.binding.port-offset=100",
"{ \"realm\": \"example_realm\", \"auth-server-url\": \"http://localhost:8180/auth/\", \"ssl-required\": \"external\", \"resource\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0 }",
"\"provider-url\" : \"http://localhost:8180/auth/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.oidc -DartifactId=simple-oidc-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-oidc-example",
"<properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <failOnMissingWebXml>false</failOnMissingWebXml> <version.server.bom>4.0.0.GA</version.server.bom> <version.server.bootable-jar>4.0.0.GA</version.server.bootable-jar> <version.wildfly-jar.maven.plugin>4.0.0.GA</version.wildfly-jar.maven.plugin> </properties>",
"<dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>3.1.0.redhat-1</version> <scope>provided</scope> </dependency> </dependencies>",
"<build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>2.1.0.Final</version> </plugin> </plugins> </build>",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.440 s [INFO] Finished at: 2021-12-27T14:45:12+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p <application_root> /src/main/java/com/example/oidc",
"mkdir -p simple-oidc-example/src/main/java/com/example/oidc",
"cd <application_root> /src/main/java/com/example/oidc",
"cd simple-oidc-example/src/main/java/com/example/oidc",
"package com.example.oidc; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. * */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>",
"{ \"provider-url\" : \"http://localhost:8180/auth/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0 }",
"<login-config> <auth-method>OIDC</auth-method> </login-config>",
"/subsystem=elytron-oidc-client/secure-deployment=simple-oidc-example.war/:add(client-id=my_jbeap,provider-url=http://localhost:8180/auth/realms/example_realm,public-client=true,ssl-required=external)",
"mvn package",
"mvn wildfly:deploy",
"username: jane_doe password: janedoep@USDUSD",
"Secured Servlet Current Principal '5cb0c4ca-0477-44c3-bdef-04db04d7e39d'",
"<security-constraint> <auth-constraint> <role-name> <allowed_role> </role-name> </auth-constraint> </security-constraint>",
"<security-constraint> <auth-constraint> <role-name>example_role</role-name> 1 </auth-constraint> </security-constraint>",
"mvn package",
"mvn wildfly:deploy",
"username: jane_doe password: janedoep@USDUSD",
"Forbidden",
"username: jane_doe password: janedoep@USDUSD",
"Secured Servlet Current Principal '5cb0c4ca-0477-44c3-bdef-04db04d7e39d'",
"RH_SSO_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset= <offset-number>",
"/home/servers/rh-sso-7.4/bin/standalone.sh -Djboss.socket.binding.port-offset=100",
"{ \"realm\": \"example_realm\", \"auth-server-url\": \"http://localhost:8180/auth/\", \"ssl-required\": \"external\", \"resource\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0 }",
"\"provider-url\" : \"http://localhost:8180/auth/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0",
"mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"mvn archetype:generate -DgroupId=com.example.oidc -DartifactId=simple-oidc-layer-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false",
"cd <name-of-your-application>",
"cd simple-oidc-layer-example",
"<repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories>",
"<pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories>",
"<properties> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <bootable.jar.maven.plugin.version>6.1.2.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>4.0.0.GA-redhat-00002</jboss.xp.galleon.feature.pack.version> </properties>",
"<dependencies> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>3.1.0.redhat-1</version> <scope>provided</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.3.4.GA</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency> </dependencies> </dependencyManagement>",
"<finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> 1 <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>elytron-oidc-client</layer> 2 </layers> <context-root>false</context-root> 3 </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>",
"<finalName>simple-oidc-layer-example</finalName>",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 19.157 s [INFO] Finished at: 2022-03-10T09:38:21+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p <application_root> /src/main/java/com/example/oidc",
"mkdir -p simple-oidc-layer-example/src/main/java/com/example/oidc",
"cd <application_root> /src/main/java/com/example/oidc",
"cd simple-oidc-layer-example/src/main/java/com/example/oidc",
"package com.example.oidc; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. * */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>*</role-name> </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>",
"{ \"provider-url\" : \"http://localhost:8180/auth/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"my_jbeap\", \"public-client\": true, \"confidential-port\": 0 }",
"<login-config> <auth-method>OIDC</auth-method> </login-config>",
"mkdir <application_root> / <cli_script_directory>",
"mkdir simple-oidc-layer-example/scripts/",
"/subsystem=elytron-oidc-client/secure-deployment=simple-oidc-layer-example.war:add(client-id=my_jbeap,provider-url=http://localhost:8180/auth/realms/example_realm,public-client=true,ssl-required=external)",
"<cli-sessions> <cli-session> <script-files> <script>scripts/configure-oidc.cli</script> </script-files> </cli-session> </cli-sessions>",
"mvn package",
"java -jar <application_root> /target/simple-oidc-layer-example-bootable.jar",
"java -jar simple-oidc-layer-example/target/simple-oidc-layer-example-bootable.jar",
"username: jane_doe password: janedoep@USDUSD",
"Secured Servlet Current Principal '5cb0c4ca-0477-44c3-bdef-04db04d7e39d'",
"<security-constraint> <auth-constraint> <role-name> <allowed_role> </role-name> </auth-constraint> </security-constraint>",
"<security-constraint> <auth-constraint> <role-name>example_role</role-name> 1 </auth-constraint> </security-constraint>",
"mvn package",
"java -jar simple-oidc-layer-example/target/simple-oidc-layer-example-bootable.jar",
"username: jane_doe password: janedoep@USDUSD",
"Forbidden",
"username: jane_doe password: janedoep@USDUSD",
"Secured Servlet Current Principal '5cb0c4ca-0477-44c3-bdef-04db04d7e39d'"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/assembly-openid-connect-in-jboss-eap_default |
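A quick verification sketch for the OIDC examples above (not part of the original procedure): an unauthenticated request to the secured servlet should be redirected to the RH-SSO login page. The host, port, and context root below are assumptions taken from the example values rather than required settings.

    curl -i http://localhost:8080/simple-oidc-example/secured
    # Expect an HTTP 302 response whose Location header points at the realm's
    # authorization endpoint, for example:
    # http://localhost:8180/auth/realms/example_realm/protocol/openid-connect/auth?...

A browser follows that redirect to the RH-SSO login form and returns to the servlet once the user (for example, jane_doe) authenticates.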
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements Red Hat JBoss Web Server 6.0 Service Pack 3 does not include any new features or enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/new_features_and_enhancements |
Chapter 73. KafkaClientAuthenticationTls schema reference | Chapter 73. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure mTLS authentication, set the type property to the value tls . mTLS uses a TLS certificate to authenticate. 73.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note mTLS authentication can only be used with TLS connections. Example mTLS configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 73.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Property type Description certificateAndKey CertAndKeySecretSource Reference to the Secret which holds the certificate and private key pair. type string Must be tls . | [
"create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key",
"authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaclientauthenticationtls-reference |
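For orientation only, the following sketch shows where the authentication block above typically sits inside a client resource such as KafkaConnect. The cluster name, bootstrap address, and CA secret are illustrative assumptions, not values defined by this schema reference.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: my-connect                                   # assumed name
    spec:
      bootstrapServers: my-cluster-kafka-bootstrap:9093  # assumed TLS listener address
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-ca-cert       # assumed cluster CA secret
            certificate: ca.crt
      authentication:
        type: tls
        certificateAndKey:
          secretName: my-secret
          certificate: my-public-tls-certificate-file.crt
          key: private.key

Because mTLS authentication only works over TLS connections, the tls section with trusted certificates accompanies the authentication block.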
Chapter 1. Supported configurations | Chapter 1. Supported configurations MS SQL Server 2019 is supported in JBoss EAP 7.4. PostgreSQL 13.2 and EnterpriseDB 13.1 were tested and are supported in JBoss EAP 7.4. MariaDB 10.3 and MariaDB Galera Cluster 10.3 were tested and are supported in JBoss EAP 7.4. IBM DB2 11.5 was tested and is supported in JBoss EAP 7.4. The Red Hat JBoss Enterprise Application Platform (EAP) 7 Supported Configurations knowledgebase article on the Red Hat Customer Portal lists databases and database connectors that were tested as part of the JBoss EAP 7.4 release. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/supported-configurations_default
Chapter 3. PerformanceProfile [performance.openshift.io/v2] | Chapter 3. PerformanceProfile [performance.openshift.io/v2] Description PerformanceProfile is the Schema for the performanceprofiles API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PerformanceProfileSpec defines the desired state of PerformanceProfile. status object PerformanceProfileStatus defines the observed state of PerformanceProfile. 3.1.1. .spec Description PerformanceProfileSpec defines the desired state of PerformanceProfile. Type object Required cpu nodeSelector Property Type Description additionalKernelArgs array (string) Additional kernel arguments. cpu object CPU defines a set of CPU related parameters. globallyDisableIrqLoadBalancing boolean GloballyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to "true" it disables IRQs load balancing for the Isolated CPU set. Setting the option to "false" allows the IRQs to be balanced across all CPUs, however the IRQs load balancing can be disabled per pod CPUs when using irq-load-balancing.crio.io/cpu-quota.crio.io annotations. Defaults to "false" hardwareTuning object HardwareTuning defines a set of CPU frequencies for isolated and reserved cpus. hugepages object HugePages defines a set of huge pages related parameters. It is possible to set huge pages with multiple size values at the same time. For example, hugepages can be set with 1G and 2M, both values will be set on the node by the Performance Profile Controller. It is important to notice that setting hugepages default size to 1G will remove all 2M related folders from the node and it will be impossible to configure 2M hugepages under the node. machineConfigLabel object (string) MachineConfigLabel defines the label to add to the MachineConfigs the operator creates. It has to be used in the MachineConfigSelector of the MachineConfigPool which targets this performance profile. Defaults to "machineconfiguration.openshift.io/role=<same role as in NodeSelector label key>" machineConfigPoolSelector object (string) MachineConfigPoolSelector defines the MachineConfigPool label to use in the MachineConfigPoolSelector of resources like KubeletConfigs created by the operator. Defaults to "machineconfiguration.openshift.io/role=<same role as in NodeSelector label key>" net object Net defines a set of network related features nodeSelector object (string) NodeSelector defines the Node label to use in the NodeSelectors of resources like Tuned created by the operator. It most likely should, but does not have to match the node label in the NodeSelector of the MachineConfigPool which targets this performance profile. 
In the case when machineConfigLabels or machineConfigPoolSelector are not set, we are expecting a certain NodeSelector format <domain>/<role>: "" in order to be able to calculate the default values for the former mentioned fields. numa object NUMA defines options related to topology aware affinities realTimeKernel object RealTimeKernel defines a set of real time kernel related parameters. RT kernel won't be installed when not set. workloadHints object WorkloadHints defines hints for different types of workloads. It will allow defining exact set of tuned and kernel arguments that should be applied on top of the node. 3.1.2. .spec.cpu Description CPU defines a set of CPU related parameters. Type object Required isolated reserved Property Type Description balanceIsolated boolean BalanceIsolated toggles whether or not the Isolated CPU set is eligible for load balancing work loads. When this option is set to "false", the Isolated CPU set will be static, meaning workloads have to explicitly assign each thread to a specific cpu in order to work across multiple CPUs. Setting this to "true" allows workloads to be balanced across CPUs. Setting this to "false" offers the most predictable performance for guaranteed workloads, but it offloads the complexity of cpu load balancing to the application. Defaults to "true" isolated string Isolated defines a set of CPUs that will be used to give to application threads the most execution time possible, which means removing as many extraneous tasks off a CPU as possible. It is important to notice the CPU manager can choose any CPU to run the workload except the reserved CPUs. In order to guarantee that your workload will run on the isolated CPU: 1. The union of reserved CPUs and isolated CPUs should include all online CPUs 2. The isolated CPUs field should be the complementary to reserved CPUs field offlined string Offline defines a set of CPUs that will be unused and set offline reserved string Reserved defines a set of CPUs that will not be used for any container workloads initiated by kubelet. shared string Shared defines a set of CPUs that will be shared among guaranteed workloads that needs additional cpus which are not exclusive, alongside the isolated, exclusive resources that are being used already by those workloads. 3.1.3. .spec.hardwareTuning Description HardwareTuning defines a set of CPU frequencies for isolated and reserved cpus. Type object Property Type Description isolatedCpuFreq integer IsolatedCpuFreq defines a minimum frequency to be set across isolated cpus reservedCpuFreq integer ReservedCpuFreq defines a maximum frequency to be set across reserved cpus 3.1.4. .spec.hugepages Description HugePages defines a set of huge pages related parameters. It is possible to set huge pages with multiple size values at the same time. For example, hugepages can be set with 1G and 2M, both values will be set on the node by the Performance Profile Controller. It is important to notice that setting hugepages default size to 1G will remove all 2M related folders from the node and it will be impossible to configure 2M hugepages under the node. Type object Property Type Description defaultHugepagesSize string DefaultHugePagesSize defines huge pages default size under kernel boot parameters. pages array Pages defines huge pages that we want to allocate at boot time. pages[] object HugePage defines the number of allocated huge pages of the specific size. 3.1.5. .spec.hugepages.pages Description Pages defines huge pages that we want to allocate at boot time. 
Type array 3.1.6. .spec.hugepages.pages[] Description HugePage defines the number of allocated huge pages of the specific size. Type object Property Type Description count integer Count defines amount of huge pages, maps to the 'hugepages' kernel boot parameter. node integer Node defines the NUMA node where hugepages will be allocated, if not specified, pages will be allocated equally between NUMA nodes size string Size defines huge page size, maps to the 'hugepagesz' kernel boot parameter. 3.1.7. .spec.net Description Net defines a set of network related features Type object Property Type Description devices array Devices contains a list of network device representations that will be set with a netqueue count equal to CPU.Reserved . If no devices are specified then the default is all devices. devices[] object Device defines a way to represent a network device in several options: device name, vendor ID, model ID, PCI path and MAC address userLevelNetworking boolean UserLevelNetworking when enabled - sets either all or specified network devices queue size to the amount of reserved CPUs. Defaults to "false". 3.1.8. .spec.net.devices Description Devices contains a list of network device representations that will be set with a netqueue count equal to CPU.Reserved . If no devices are specified then the default is all devices. Type array 3.1.9. .spec.net.devices[] Description Device defines a way to represent a network device in several options: device name, vendor ID, model ID, PCI path and MAC address Type object Property Type Description deviceID string Network device ID (model) represented as a 16 bit hexadecimal number. interfaceName string Network device name to be matched. It uses a syntax of shell-style wildcards which are either positive or negative. vendorID string Network device vendor ID represented as a 16 bit hexadecimal number. 3.1.10. .spec.numa Description NUMA defines options related to topology aware affinities Type object Property Type Description topologyPolicy string Name of the policy applied when TopologyManager is enabled Operator defaults to "best-effort" 3.1.11. .spec.realTimeKernel Description RealTimeKernel defines a set of real time kernel related parameters. RT kernel won't be installed when not set. Type object Property Type Description enabled boolean Enabled defines if the real time kernel packages should be installed. Defaults to "false" 3.1.12. .spec.workloadHints Description WorkloadHints defines hints for different types of workloads. It will allow defining exact set of tuned and kernel arguments that should be applied on top of the node. Type object Property Type Description highPowerConsumption boolean HighPowerConsumption defines if the node should be configured in high power consumption mode. The flag will affect the power consumption but will improve the CPUs latency. Defaults to false. mixedCpus boolean MixedCpus enables the mixed-cpu-node-plugin on the node. Defaults to false. perPodPowerManagement boolean PerPodPowerManagement defines if the node should be configured in per pod power management. PerPodPowerManagement and HighPowerConsumption hints can not be enabled together. Defaults to false. realTime boolean RealTime defines if the node should be configured for the real time workload. Defaults to true. 3.1.13. .status Description PerformanceProfileStatus defines the observed state of PerformanceProfile. Type object Property Type Description conditions array Conditions represents the latest available observations of current state. 
conditions[] object Condition represents the state of the operator's reconciliation functionality. runtimeClass string RuntimeClass contains the name of the RuntimeClass resource created by the operator. tuned string Tuned points to the Tuned custom resource object that contains the tuning values generated by this operator. 3.1.14. .status.conditions Description Conditions represents the latest available observations of current state. Type array 3.1.15. .status.conditions[] Description Condition represents the state of the operator's reconciliation functionality. Type object Required status type Property Type Description lastHeartbeatTime string lastTransitionTime string message string reason string status string type string ConditionType is the state of the operator's reconciliation functionality. 3.2. API endpoints The following API endpoints are available: /apis/performance.openshift.io/v2/performanceprofiles DELETE : delete collection of PerformanceProfile GET : list objects of kind PerformanceProfile POST : create a PerformanceProfile /apis/performance.openshift.io/v2/performanceprofiles/{name} DELETE : delete a PerformanceProfile GET : read the specified PerformanceProfile PATCH : partially update the specified PerformanceProfile PUT : replace the specified PerformanceProfile /apis/performance.openshift.io/v2/performanceprofiles/{name}/status GET : read status of the specified PerformanceProfile PATCH : partially update status of the specified PerformanceProfile PUT : replace status of the specified PerformanceProfile 3.2.1. /apis/performance.openshift.io/v2/performanceprofiles HTTP method DELETE Description delete collection of PerformanceProfile Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PerformanceProfile Table 3.2. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfileList schema 401 - Unauthorized Empty HTTP method POST Description create a PerformanceProfile Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.5. 
HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 202 - Accepted PerformanceProfile schema 401 - Unauthorized Empty 3.2.2. /apis/performance.openshift.io/v2/performanceprofiles/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the PerformanceProfile HTTP method DELETE Description delete a PerformanceProfile Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PerformanceProfile Table 3.9. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PerformanceProfile Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PerformanceProfile Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 401 - Unauthorized Empty 3.2.3. /apis/performance.openshift.io/v2/performanceprofiles/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the PerformanceProfile HTTP method GET Description read status of the specified PerformanceProfile Table 3.16. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PerformanceProfile Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PerformanceProfile Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/node_apis/performanceprofile-performance-openshift-io-v2 |
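A minimal sketch of a PerformanceProfile manifest that ties the required and common fields above together; the CPU ranges, hugepage counts, and node selector label are illustrative assumptions, not recommended values.

    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: example-profile                      # assumed name
    spec:
      cpu:
        reserved: "0-1"                          # assumed housekeeping CPUs
        isolated: "2-15"                         # assumed workload CPUs
      hugepages:
        defaultHugepagesSize: "1G"
        pages:
          - size: "1G"
            count: 4
      numa:
        topologyPolicy: "best-effort"
      realTimeKernel:
        enabled: false
      nodeSelector:
        node-role.kubernetes.io/worker-cnf: ""   # assumed MachineConfigPool role label

Note that cpu and nodeSelector are the only required spec fields; the others are included only to illustrate the structures described above.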
2.2. Entitlement | 2.2. Entitlement subscription manager component When registering a system with firstboot , the RHN Classic option is checked by default in the Subscription part. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/entitlement |
7.15. RHEA-2014:1516 - new packages: libnetfilter_queue | 7.15. RHEA-2014:1516 - new packages: libnetfilter_queue New libnetfilter_queue packages are now available for Red Hat Enterprise Linux 6. The libnetfilter_queue packages include a user space library providing an API to packets that have been queued by the kernel packet filter. It is part of a system that deprecates the old ip_queue or libipq mechanism. This enhancement update adds the libnetfilter_queue packages to Red Hat Enterprise Linux 6. (BZ# 738244 ) All users who require libnetfilter_queue are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1516 |
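A minimal installation sketch for a subscribed Red Hat Enterprise Linux 6 system; the -devel package name is an assumption based on the usual packaging split and is only needed when compiling against the library.

    yum install libnetfilter_queue libnetfilter_queue-devel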
Chapter 3. Preparing Storage for Red Hat Virtualization | Chapter 3. Preparing Storage for Red Hat Virtualization Prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. Self-hosted engines must have an additional data domain dedicated to the Manager virtual machine. This domain is created during the self-hosted engine deployment, and must be at least 74 GiB. You must prepare the storage for this domain before beginning the deployment. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Important If you are using iSCSI storage, the self-hosted engine storage domain must use its own iSCSI target. Any additional storage domains must use a different iSCSI target. Warning Creating additional data storage domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. 3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up and configuring NFS, see Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on how to export an 'NFS' share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Procedure Create the group kvm : Create the user vdsm in the group kvm : Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership: Change the mode of the directory so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users: 3.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide . 
Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 3.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 3.4. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . 3.5. Customizing Multipath Configurations for SAN Vendors To customize the multipath configuration settings, do not modify /etc/multipath.conf . Instead, create a new configuration file that overrides /etc/multipath.conf . Warning Upgrading Virtual Desktop and Server Manager (VDSM) overwrites the /etc/multipath.conf file. If multipath.conf contains customizations, overwriting it can trigger storage issues. 
Prerequisites This topic only applies to systems that have been configured to use multipath connections storage domains, and therefore have a /etc/multipath.conf file. Do not override the user_friendly_names and find_multipaths settings. For more information, see Section 3.6, "Recommended Settings for Multipath.conf" Avoid overriding no_path_retry and polling_interval unless required by the storage vendor. For more information, see Section 3.6, "Recommended Settings for Multipath.conf" Procedure To override the values of settings in /etc/multipath.conf , create a new configuration file in the /etc/multipath/conf.d/ directory. Note The files in /etc/multipath/conf.d/ execute in alphabetical order. Follow the convention of naming the file with a number at the beginning of its name. For example, /etc/multipath/conf.d/90-myfile.conf . Copy the settings you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/ . Edit the setting values and save your changes. Apply the new configuration settings by entering the systemctl reload multipathd command. Note Avoid restarting the multipathd service. Doing so generates errors in the VDSM logs. Verification steps If you override the VDSM-generated settings in /etc/multipath.conf , verify that the new configuration performs as expected in a variety of failure scenarios. For example, disable all of the storage connections. Then enable one connection at a time and verify that doing so makes the storage domain reachable. Troubleshooting If a Red Hat Virtualization Host has trouble accessing shared storage, check /etc/multpath.conf and files under /etc/multipath/conf.d/ for values that are incompatible with the SAN. Additional resources Red Hat Enterprise Linux DM Multipath in the RHEL documentation. Configuring iSCSI Multipathing in the Administration Guide. How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? on the Red Hat Customer Portal, which shows an example multipath.conf file and was the basis for this topic. 3.6. Recommended Settings for Multipath.conf When overriding /etc/multipath.conf , Do not override the following settings: user_friendly_names no This setting controls whether user-friendly names are assigned to devices in addition to the actual device names. Multiple hosts must use the same name to access devices. Disabling this setting prevents user-friendly names from interfering with this requirement. find_multipaths no This setting controls whether RHVH tries to access all devices through multipath, even if only one path is available. Disabling this setting prevents RHV from using the too-clever behavior when this setting is enabled. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. 
For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner. Before backing up the Manager, ensure it is updated to the latest minor version. The Manager version in the backup file must match the version of the new Manager. | [
"groupadd kvm -g 36",
"useradd vdsm -u 36 -g 36",
"chown -R 36:36 /exports/data",
"chmod 0755 /exports/data",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/preparing_storage_for_rhv_migrating_to_she |
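To make the drop-in approach in sections 3.5 and 3.6 concrete, an override file might look like the sketch below; the vendor and product strings are placeholders, and the no_path_retry value is only an example of a vendor-required override, not a recommendation.

    # /etc/multipath/conf.d/90-myfile.conf
    # Only the listed attributes are overridden; everything else still comes
    # from the VDSM-generated /etc/multipath.conf.
    devices {
        device {
            vendor         "EXAMPLE"       # placeholder SAN vendor string
            product        "EXAMPLE-LUN"   # placeholder product string
            no_path_retry  16              # example vendor-required value
        }
    }

After saving the file, systemctl reload multipathd applies the change without restarting the service, as the procedure above requires.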
Chapter 1. Quay.io overview | Chapter 1. Quay.io overview Quay.io is a registry for storing, building, and distributing container images and other OCI artifacts. This robust and feature-rich container registry service has gained widespread popularity among developers, organizations, and enterprises, for establishing itself as one of the pioneering platforms in the containerization ecosystem. It offers both free and paid tiers to cater to various user needs. At its core, Quay.io serves as a centralized repository for storing, managing, and distributing container images. One primary advantage of Quay.io is its flexibility and ease of use. It offers an intuitive web interface that allows users to quickly upload and manage their container images. Developers can create private repositories, ensuring sensitive or proprietary code remains secure within their organization. Additionally, users can set up access controls and manage team collaboration, enabling seamless sharing of container images among designated team members. Quay.io addresses container security concerns through its integrated image scanner, Clair . The service automatically scans container images for known vulnerabilities and security issues, providing developers with valuable insights into potential risks and suggesting remediation steps. Quay.io excels in automation, and supports integration with popular Continuous Integration/Continuous Deployment (CI/CD) tools and platforms, enabling seamless automation of the container build and deployment processes. As a result, developers can streamline their workflows, significantly reducing manual intervention and improving overall development efficiency. Quay.io caters to the needs of both large and small-scale deployments. Its robust architecture and support for high availability ensures that organizations can rely on it for mission-critical applications. The platform can handle significant container image traffic and offers efficient replication and distribution mechanisms to deliver container images to various geographical locations. Quay.io has established itself as an active hub for container enthusiasts. Developers can discover a vast collection of pre-built, public container images shared by other users, making it easier to find useful tools, applications, and services for their projects. This open sharing ecosystem fosters collaboration and accelerates software development within the container community. As containerization continues to gain momentum in the software development landscape, Quay.io remains at the forefront, continually improving and expanding its services. The platform's commitment to security, ease of use, automation, and community engagement has solidified its position as a preferred container registry service for both individual developers and large organizations alike. As technology evolves, it is crucial to verify the latest features and updates on the Quay.io platform through its official website or other reliable sources. Whether you are an individual developer, part of a team, or representing an enterprise, Quay.io can enhance your containerization experience and streamline your journey towards building and deploying modern applications with ease. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/quayio-overview |
Chapter 14. Networking Tapset | Chapter 14. Networking Tapset This family of probe points is used to probe the activities of the network device and protocol layers. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/networking-dot-stp |
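A small usage sketch, assuming the netdev.receive probe point provided by this tapset family and a host with SystemTap installed; the 10-second window is arbitrary.

    # count received packets and bytes per interface for 10 seconds
    stap -e '
    global pkts
    probe netdev.receive { pkts[dev_name] <<< length }
    probe timer.s(10) {
        foreach (d in pkts)
            printf("%s: %d packets, %d bytes\n", d, @count(pkts[d]), @sum(pkts[d]))
        exit()
    }'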
Chapter 3. Planning the replica topology | Chapter 3. Planning the replica topology Review guidance on determining the appropriate replica topology for your use case. 3.1. Multiple replica servers as a solution for high performance and disaster recovery You can achieve continuous functionality and high-availability of Identity Management (IdM) services by creating replicas of the existing IdM servers. When you create an appropriate number of IdM replicas, you can use load balancing to distribute client requests across multiple servers to optimize performance of IdM services. With IdM, you can place additional servers in geographically dispersed data centers to reflect your enterprise organizational structure. In this way, the path between IdM clients and the nearest accessible server is shortened. In addition, having multiple servers allows spreading the load and scaling for more clients. Replicating IdM servers is also a common backup mechanism to mitigate or prevent server loss. For example, if one server fails, the remaining servers continue providing services to the domain. You can also recover the lost server by creating a new replica based on one of the remaining servers. 3.2. Introduction to IdM servers and clients The Identity Management (IdM) domain includes the following types of systems: IdM clients IdM clients are Red Hat Enterprise Linux systems enrolled with the servers and configured to use the IdM services on these servers. Clients interact with the IdM servers to access services provided by them. For example, clients use the Kerberos protocol to perform authentication and acquire tickets for enterprise single sign-on (SSO), use LDAP to get identity and policy information, and use DNS to detect where the servers and services are located and how to connect to them. IdM servers IdM servers are Red Hat Enterprise Linux systems that respond to identity, authentication, and authorization requests from IdM clients within an IdM domain. IdM servers are the central repositories for identity and policy information. They can also host any of the optional services used by domain members: Certificate authority (CA): This service is present in most IdM deployments. Key Recovery Authority (KRA) DNS Active Directory (AD) trust controller Active Directory (AD) trust agent IdM servers are also embedded IdM clients. As clients enrolled with themselves, the servers provide the same functionality as other clients. To provide services for large numbers of clients, as well as for redundancy and availability, IdM allows deployment on multiple IdM servers in a single domain. It is possible to deploy up to 60 servers. This is the maximum number of IdM servers, also called replicas, that is currently supported in the IdM domain. When creating a replica, IdM clones the configuration of the existing server. A replica shares with the initial server its core configuration, including internal information about users, systems, certificates, and configured policies. NOTE A replica and the server it was created from are functionally identical, except for the CA renewal and CRL publisher roles. Therefore, the term server and replica are used interchangeably in RHEL IdM documentation, depending on the context. However, different IdM servers can provide different services for the client, if so configured. Core components like Kerberos and LDAP are available on every server. Other services like CA, DNS, Trust Controller or Vault are optional. 
This means that different IdM servers can have distinct roles in the deployment. If your IdM topology contains an integrated CA, one server has the role of the Certificate revocation list (CRL) publisher server and one server has the role of the CA renewal server . By default, the first CA server installed fulfills these two roles, but you can assign these roles to separate servers. Warning The CA renewal server is critical for your IdM deployment because it is the only system in the domain responsible for tracking CA subsystem certificates and keys . For details about how to recover from a disaster affecting your IdM deployment, see Performing disaster recovery with Identity Management . NOTE All IdM servers (for clients, see Supported versions of RHEL for installing IdM clients ) must be running on the same major and minor version of RHEL. Do not spend more than several days applying z-stream updates or upgrading the IdM servers in your topology. For details about how to apply Z-stream fixes and upgrade your servers, see Updating IdM packages . For details about how to migrate to IdM on RHEL 8, see Migrating your IdM environment from RHEL 7 servers to RHEL 8 servers . 3.3. Replication agreements between IdM replicas When an administrator creates a replica based on an existing server, Identity Management (IdM) creates a replication agreement between the initial server and the replica. The replication agreement ensures that the data and configuration is continuously replicated between the two servers. IdM uses multiple read/write replica replication . In this configuration, all replicas joined in a replication agreement receive and provide updates, and are therefore considered suppliers and consumers. Replication agreements are always bilateral. Figure 3.1. Server and replica agreements IdM uses two types of replication agreements: Domain replication agreements replicate the identity information. Certificate replication agreements replicate the certificate information. Both replication channels are independent. Two servers can have one or both types of replication agreements configured between them. For example, when server A and server B have only domain replication agreement configured, only identity information is replicated between them, not the certificate information. 3.4. Guidelines for determining the appropriate number of IdM replicas in a topology Plan IdM topology to match your organization's requirements and ensure optimal performance and service availability. Set up at least two replicas in each data center Deploy at least two replicas in each data center to ensure that if one server fails, the replica can take over and handle requests. Set up a sufficient number of servers to serve your clients One Identity Management (IdM) server can provide services to 2000 - 3000 clients. This assumes the clients query the servers multiple times a day, but not, for example, every minute. If you expect frequent queries, plan for more servers. Set up a sufficient number of Certificate Authority (CA) replicas Only replicas with the CA role installed can replicate certificate data. If you use the IdM CA, ensure your environment has at least two CA replicas with certificate replication agreements between them. Set up a maximum of 60 replicas in a single IdM domain Red Hat supports environments with up to 60 replicas. 3.5. 
Guidelines for connecting IdM replicas in a topology Connect each replica to at least two other replicas This ensures that information is replicated not just between the initial replica and the first server you installed, but between other replicas as well. Connect a replica to a maximum of four other replicas (not a hard requirement) A large number of replication agreements per server does not add significant benefits. A receiving replica can only be updated by one other replica at a time and meanwhile, the other replication agreements are idle. More than four replication agreements per replica typically means a waste of resources. Note This recommendation applies to both certificate replication and domain replication agreements. There are two exceptions to the limit of four replication agreements per replica: You want failover paths if certain replicas are not online or responding. In larger deployments, you want additional direct links between specific nodes. Configuring a high number of replication agreements can have a negative impact on overall performance: when multiple replication agreements in the topology are sending updates, certain replicas can experience a high contention on the changelog database file between incoming updates and the outgoing updates. If you decide to use more replication agreements per replica, ensure that you do not experience replication issues and latency. However, note that large distances and high numbers of intermediate nodes can also cause latency problems. Connect the replicas in a data center with each other This ensures domain replication within the data center. Connect each data center to at least two other data centers This ensures domain replication between data centers. Connect data centers using at least a pair of replication agreements If data centers A and B have a replication agreement from A1 to B1, having a replication agreement from A2 to B2 ensures that if one of the servers is down, the replication can continue between the two data centers. 3.6. Replica topology examples You can create a reliable replica topology by using one of the following examples. Figure 3.2. Replica topology with four data centers, each with four servers that are connected with replication agreements Figure 3.3. Replica topology with three data centers, each with a different number of servers that are all interconnected through replication agreements 3.7. The hidden replica mode A hidden replica is an IdM server that has all services running and available. However, a hidden replica has no SRV records in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas. By default, when you set up a replica, the installation program automatically creates service (SRV) resource records for it in DNS. These records enable clients to auto-discover the replica and its services. When installing a replica as hidden, add the --hidden-replica parameter to the ipa-replica-install command. Note The hidden replica feature, introduced in RHEL 8.1 as a Technology Preview, is fully supported starting with RHEL 8.2. Hidden replicas are primarily designed for dedicated services that might disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the server. As no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. 
Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. Before backing up a hidden replica, you must install all required server roles used in a cluster, especially the Certificate Authority role if the integrated CA is used. Therefore, restoring a backup from a hidden replica on a new host always results in a regular replica. Additional resources Installing an Identity Management replica Backing up and restoring IdM Demoting or promoting hidden replicas | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/planning-the-replica-topology_planning-identity-management |
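The hidden replica guidance above can be illustrated with a short, hedged command sketch. The hostname is a placeholder, and the exact set of installation options (DNS, CA, and other server roles) depends on your deployment; the promotion command follows the ipa server-state interface referenced in "Demoting or promoting hidden replicas", so verify the syntax for your RHEL version before using it.

# Install a replica in hidden mode so that no DNS SRV records are created for it.
# The --setup-ca option is shown because a hidden replica used for backups should
# carry all server roles used in the cluster, including the CA role.
ipa-replica-install --hidden-replica --setup-ca

# Later, promote the hidden replica to a regular, discoverable replica (assumed syntax).
ipa server-state replica.idm.example.com --state=enabled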
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/standalone_deployment_guide/proc_providing-feedback-on-red-hat-documentation
3.2. Mounting a GFS2 File System | 3.2. Mounting a GFS2 File System Note You should always use Pacemaker to manage the GFS2 file system in a production environment rather than manually mounting the file system with a mount command, as this may cause issues at system shutdown as described in Section 3.3, "Unmounting a GFS2 File System" . Before you can mount a GFS2 file system, the file system must exist (see Section 3.1, "Creating a GFS2 File System" ), the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started (see Configuring and Managing a Red Hat Cluster ). After those requirements have been met, you can mount the GFS2 file system as you would any Linux file system. For proper operation of the GFS2 file system, the gfs2-utils package must be installed on all nodes that mount a GFS2 file system. The gfs2-utils package is part of the Resilient Storage channel. To manipulate file ACLs, you must mount the file system with the -o acl mount option. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). Usage Mounting Without ACL Manipulation Mounting With ACL Manipulation -o acl GFS2-specific option to allow manipulating file ACLs. BlockDevice Specifies the block device where the GFS2 file system resides. MountPoint Specifies the directory where the GFS2 file system should be mounted. Example In this example, the GFS2 file system on /dev/vg01/lvol0 is mounted on the /mygfs2 directory. Complete Usage The -o option argument consists of GFS2-specific options (see Table 3.2, "GFS2-Specific Mount Options" ) or acceptable standard Linux mount -o options, or a combination of both. Multiple option parameters are separated by a comma and no spaces. Note The mount command is a Linux system command. In addition to using GFS2-specific options described in this section, you can use other, standard, mount command options (for example, -r ). For information about other Linux mount command options, see the Linux mount man page. Table 3.2, "GFS2-Specific Mount Options" describes the available GFS2-specific -o option values that can be passed to GFS2 at mount time. Note This table includes descriptions of options that are used with local file systems only. Note, however, that as of the Red Hat Enterprise Linux 6 release, Red Hat does not support the use of GFS2 as a single-node file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes). Table 3.2. GFS2-Specific Mount Options Option Description acl Allows manipulating file ACLs. If a file system is mounted without the acl mount option, users are allowed to view ACLs (with getfacl ), but are not allowed to set them (with setfacl ). data=[ordered|writeback] When data=ordered is set, the user data modified by a transaction is flushed to the disk before the transaction is committed to disk. This should prevent the user from seeing uninitialized blocks in a file after a crash. When data=writeback mode is set, the user data is written to the disk at any time after it is dirtied; this does not provide the same consistency guarantee as ordered mode, but it should be slightly faster for some workloads. The default value is ordered mode. ignore_local_fs Caution: This option should not be used when GFS2 file systems are shared. Forces GFS2 to treat the file system as a multi-host file system. 
By default, using lock_nolock automatically turns on the localflocks flag. localflocks Caution: This option should not be used when GFS2 file systems are shared. Tells GFS2 to let the VFS (virtual file system) layer do all flock and fcntl. The localflocks flag is automatically turned on by lock_nolock . lockproto= LockModuleName Allows the user to specify which locking protocol to use with the file system. If LockModuleName is not specified, the locking protocol name is read from the file system superblock. locktable= LockTableName Allows the user to specify which locking table to use with the file system. quota=[off/account/on] Turns quotas on or off for a file system. Setting the quotas to be in the account state causes the per UID/GID usage statistics to be correctly maintained by the file system; limit and warn values are ignored. The default value is off . errors=panic|withdraw When errors=panic is specified, file system errors will cause a kernel panic. When errors=withdraw is specified, which is the default behavior, file system errors will cause the system to withdraw from the file system and make it inaccessible until the reboot; in some cases the system may remain running. discard/nodiscard Causes GFS2 to generate "discard" I/O requests for blocks that have been freed. These can be used by suitable hardware to implement thin provisioning and similar schemes. barrier/nobarrier Causes GFS2 to send I/O barriers when flushing the journal. The default value is on . This option is automatically turned off if the underlying device does not support I/O barriers. Use of I/O barriers with GFS2 is highly recommended at all times unless the block device is designed so that it cannot lose its write cache content (for example, if it is on a UPS or it does not have a write cache). quota_quantum= secs Sets the number of seconds for which a change in the quota information may sit on one node before being written to the quota file. This is the preferred way to set this parameter. The value is an integer number of seconds greater than zero. The default is 60 seconds. Shorter settings result in faster updates of the lazy quota information and less likelihood of someone exceeding their quota. Longer settings make file system operations involving quotas faster and more efficient. statfs_quantum= secs Setting statfs_quantum to 0 is the preferred way to set the slow version of statfs . The default value is 30 secs which sets the maximum time period before statfs changes will be synced to the master statfs file. This can be adjusted to allow for faster, less accurate statfs values or slower more accurate values. When this option is set to 0, statfs will always report the true values. statfs_percent= value Provides a bound on the maximum percentage change in the statfs information on a local basis before it is synced back to the master statfs file, even if the time period has not expired. If the setting of statfs_quantum is 0, then this setting is ignored. | [
"mount BlockDevice MountPoint",
"mount -o acl BlockDevice MountPoint",
"mount /dev/vg01/lvol0 /mygfs2",
"mount BlockDevice MountPoint -o option"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/s1-manage-mountfs |
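To make the option syntax above concrete, the following hedged example mounts the same file system with ACL manipulation and quota accounting enabled alongside a standard Linux option; the device, mount point, and chosen options are illustrative only, and in a production cluster the mount should still be managed by Pacemaker as noted at the start of the section.

# GFS2-specific options (acl, quota=account) and standard options (noatime) are
# combined in a single comma-separated list with no spaces.
mount -o acl,quota=account,noatime /dev/vg01/lvol0 /mygfs2

# Confirm the active mount and its options.
mount | grep mygfs2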
Chapter 4. Planning your environment according to object maximums | Chapter 4. Planning your environment according to object maximums Consider the following tested object maximums when you plan your OpenShift Container Platform cluster. These guidelines are based on the largest possible cluster. For smaller clusters, the maximums are lower. There are many factors that influence the stated thresholds, including the etcd version or storage data format. In most cases, exceeding these numbers results in lower overall performance. It does not necessarily mean that the cluster will fail. Warning Clusters that experience rapid change, such as those with many starting and stopping pods, can have a lower practical maximum size than documented. 4.1. OpenShift Container Platform tested cluster maximums for major releases Note Red Hat does not provide direct guidance on sizing your OpenShift Container Platform cluster. This is because determining whether your cluster is within the supported bounds of OpenShift Container Platform requires careful consideration of all the multidimensional factors that limit the cluster scale. OpenShift Container Platform supports tested cluster maximums rather than absolute cluster maximums. Not every combination of OpenShift Container Platform version, control plane workload, and network plugin are tested, so the following table does not represent an absolute expectation of scale for all deployments. It might not be possible to scale to a maximum on all dimensions simultaneously. The table contains tested maximums for specific workload and deployment configurations, and serves as a scale guide as to what can be expected with similar deployments. Maximum type 4.x tested maximum Number of nodes 2,000 [1] Number of pods [2] 150,000 Number of pods per node 2,500 [3] Number of pods per core There is no default value. Number of namespaces [4] 10,000 Number of builds 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy Number of pods per namespace [5] 25,000 Number of routes and back ends per Ingress Controller 2,000 per router Number of secrets 80,000 Number of config maps 90,000 Number of services [6] 10,000 Number of services per namespace 5,000 Number of back-ends per service 5,000 Number of deployments per namespace [5] 2,000 Number of build configs 12,000 Number of custom resource definitions (CRD) 1,024 [7] Pause pods were deployed to stress the control plane components of OpenShift Container Platform at 2000 node scale. The ability to scale to similar numbers will vary depending upon specific deployment and workload parameters. The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements. This was tested on a cluster with 31 servers: 3 control planes, 2 infrastructure nodes, and 26 worker nodes. If you need 2,500 user pods, you need both a hostPrefix of 20 , which allocates a network large enough for each node to contain more than 2000 pods, and a custom kubelet config with maxPods set to 2500 . For more information, see Running 2500 pods per node on OCP 4.13 . When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage. There are several control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. 
Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back-end has a corresponding entry in iptables . The number of back-ends of a given service impact the size of the Endpoints objects, which impacts the size of data that is being sent all over the system. Tested on a cluster with 29 servers: 3 control planes, 2 infrastructure nodes, and 24 worker nodes. The cluster had 500 namespaces. OpenShift Container Platform has a limit of 1,024 total custom resource definitions (CRD), including those installed by OpenShift Container Platform, products integrating with OpenShift Container Platform and user-created CRDs. If there are more than 1,024 CRDs created, then there is a possibility that oc command requests might be throttled. 4.1.1. Example scenario As an example, 500 worker nodes (m5.2xl) were tested, and are supported, using OpenShift Container Platform 4.17, the OVN-Kubernetes network plugin, and the following workload objects: 200 namespaces, in addition to the defaults 60 pods per node; 30 server and 30 client pods (30k total) 57 image streams/ns (11.4k total) 15 services/ns backed by the server pods (3k total) 15 routes/ns backed by the services (3k total) 20 secrets/ns (4k total) 10 config maps/ns (2k total) 6 network policies/ns, including deny-all, allow-from ingress and intra-namespace rules 57 builds/ns The following factors are known to affect cluster workload scaling, positively or negatively, and should be factored into the scale numbers when planning a deployment. For additional information and guidance, contact your sales representative or Red Hat support . Number of pods per node Number of containers per pod Type of probes used (for example, liveness/readiness, exec/http) Number of network policies Number of projects, or namespaces Number of image streams per project Number of builds per project Number of services/endpoints and type Number of routes Number of shards Number of secrets Number of config maps Rate of API calls, or the cluster "churn", which is an estimation of how quickly things change in the cluster configuration. Prometheus query for pod creation requests per second over 5 minute windows: sum(irate(apiserver_request_count{resource="pods",verb="POST"}[5m])) Prometheus query for all API requests per second over 5 minute windows: sum(irate(apiserver_request_count{}[5m])) Cluster node resource consumption of CPU Cluster node resource consumption of memory 4.2. OpenShift Container Platform environment and configuration on which the cluster maximums are tested 4.2.1. AWS cloud platform Node Flavor vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Region Control plane/etcd [1] r5.4xlarge 16 128 gp3 220 3 us-west-2 Infra [2] m5.12xlarge 48 192 gp3 100 3 us-west-2 Workload [3] m5.4xlarge 16 64 gp3 500 [4] 1 us-west-2 Compute m5.2xlarge 8 32 gp3 100 3/25/250/500 [5] us-west-2 gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. 
Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts. 4.2.2. IBM Power platform Node vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Control plane/etcd [1] 16 32 io1 120 / 10 IOPS per GiB 3 Infra [2] 16 64 gp2 120 2 Workload [3] 16 256 gp2 120 [4] 1 Compute 16 64 gp2 120 2 to 100 [5] io1 disks with 120 / 10 IOPS per GiB are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations. 4.2.3. IBM Z platform Node vCPU [4] RAM(GiB) [5] Disk type Disk size(GiB)/IOS Count Control plane/etcd [1,2] 8 32 ds8k 300 / LCU 1 3 Compute [1,3] 8 32 ds8k 150 / LCU 2 4 nodes (scaled to 100/250/500 pods per node) Nodes are distributed between two logical control units (LCUs) to optimize disk I/O load of the control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads. Four compute nodes are used for the tests running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate whether pods can be instantiated. Next, a network- and CPU-demanding client/server workload was used to evaluate the stability of the system under stress. Client and server pods were pairwise deployed and each pair was spread over two compute nodes. No separate workload node was used. The workload simulates a microservice workload between two compute nodes. Physical number of processors used is six Integrated Facilities for Linux (IFLs). Total physical memory used is 512 GiB. 4.3. How to plan your environment according to tested cluster maximums Important Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster. The numbers noted in this documentation are based on Red Hat's test methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments. While planning your environment, determine how many pods are expected to fit per node: The default maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application's memory, CPU, and storage requirements, as described in "How to plan your environment according to application requirements". Example scenario If you want to scope your cluster for 2200 pods per cluster, you would need at least five nodes, assuming that there are 500 maximum pods per node: If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node: Where: OpenShift Container Platform comes with several system pods, such as OVN-Kubernetes, DNS, Operators, and others, which run across every worker node by default. Therefore, the result of the above formula can vary. 4.4.
How to plan your environment according to application requirements Consider an example application environment: Pod type Pod quantity Max memory CPU cores Persistent storage apache 100 500 MB 0.5 1 GB node.js 200 1 GB 1 1 GB postgresql 100 1 GB 2 10 GB JBoss EAP 100 1 GB 1 1 GB Extrapolated requirements: 550 CPU cores, 450GB RAM, and 1.4TB storage. Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered. Node type Quantity CPUs RAM (GB) Nodes (option 1) 100 4 16 Nodes (option 2) 50 8 32 Nodes (option 3) 25 16 64 Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory cannot be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. The application pods can access a service either by using environment variables or DNS. If using environment variables, for each active service the variables are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used in case you must go beyond 5000 services. When using environment variables for service discovery, if the argument list exceeds the allowed length after 5000 services in a namespace, then the pods and deployments will start failing. Disable the service links in the deployment's service specification file to overcome this: --- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service.
tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: "USD{IMAGE}" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR2_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR3_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR4_USD{IDENTIFIER} value: "USD{ENV_VALUE}" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: "[A-Za-z0-9]{255}" required: false labels: template: deployment-config-template The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. ARG_MAX on the system defines the maximum argument length for a new process and it is set to 2097152 bytes (2 MiB) by default. The Kubelet injects environment variables in to each pod scheduled to run in the namespace including: <SERVICE_NAME>_SERVICE_HOST=<IP> <SERVICE_NAME>_SERVICE_PORT=<PORT> <SERVICE_NAME>_PORT=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR> The pods in the namespace will start to fail if the argument length exceeds the allowed value and the number of characters in a service name impacts it. For example, in a namespace with 5000 services, the limit on the service name is 33 characters, which enables you to run 5000 pods in the namespace. | [
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 500 = 4.4",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/planning-your-environment-according-to-object-maximums |
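The 2,500 pods-per-node figure above relies on a custom kubelet configuration in addition to a hostPrefix of 20. The following is a minimal sketch of such a KubeletConfig object; the object name and the custom-kubelet label are assumptions, and the label must match one you have applied to the target MachineConfigPool.

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods                 # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: large-pods     # assumed label on the worker MachineConfigPool
  kubeletConfig:
    maxPods: 2500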
Chapter 2. Creating embedded caches | Chapter 2. Creating embedded caches Data Grid provides an EmbeddedCacheManager API that lets you control both the Cache Manager and embedded cache lifecycles programmatically. 2.1. Adding Data Grid to your project Add Data Grid to your project to create embedded caches in your applications. Prerequisites Configure your project to get Data Grid artifacts from the Maven repository. Procedure Add the infinispan-core artifact as a dependency in your pom.xml as follows: <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> </dependencies> 2.2. Creating and using embedded caches Data Grid provides a GlobalConfigurationBuilder API that controls the Cache Manager and a ConfigurationBuilder API that configures caches. Prerequisites Add the infinispan-core artifact as a dependency in your pom.xml . Procedure Initialize a CacheManager . Note You must always call the cacheManager.start() method to initialize a CacheManager before you can create caches. Default constructors do this for you but there are overloaded versions of the constructors that do not. Cache Managers are also heavyweight objects and Data Grid recommends instantiating only one instance per JVM. Use the ConfigurationBuilder API to define cache configuration. Obtain caches with getCache() , createCache() , or getOrCreateCache() methods. Data Grid recommends using the getOrCreateCache() method because it either creates a cache on all nodes or returns an existing cache. If necessary use the PERMANENT flag for caches to survive restarts. Stop the CacheManager by calling the cacheManager.stop() method to release JVM resources and gracefully shutdown any caches. // Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Initialize the default Cache Manager. DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); // Create a distributed cache with synchronous replication. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Obtain a volatile cache. Cache<String, String> cache = cacheManager.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE).getOrCreateCache("myCache", builder.build()); // Stop the Cache Manager. cacheManager.stop(); getCache() method Invoke the getCache(String) method to obtain caches, as follows: Cache<String, String> myCache = manager.getCache("myCache"); The preceding operation creates a cache named myCache , if it does not already exist, and returns it. Using the getCache() method creates the cache only on the node where you invoke the method. In other words, it performs a local operation that must be invoked on each node across the cluster. Typically, applications deployed across multiple nodes obtain caches during initialization to ensure that caches are symmetric and exist on each node. createCache() method Invoke the createCache() method to create caches dynamically across the entire cluster. Cache<String, String> myCache = manager.administration().createCache("myCache", "myTemplate"); The preceding operation also automatically creates caches on any nodes that subsequently join the cluster. Caches that you create with the createCache() method are ephemeral by default. If the entire cluster shuts down, the cache is not automatically created again when it restarts. PERMANENT flag Use the PERMANENT flag to ensure that caches can survive restarts. 
Cache<String, String> myCache = manager.administration().withFlags(AdminFlag.PERMANENT).createCache("myCache", "myTemplate"); For the PERMANENT flag to take effect, you must enable global state and set a configuration storage provider. For more information about configuration storage providers, see GlobalStateConfigurationBuilder#configurationStorage() . Additional resources EmbeddedCacheManager EmbeddedCacheManager Configuration org.infinispan.configuration.global.GlobalConfiguration org.infinispan.configuration.cache.ConfigurationBuilder 2.3. Cache API Data Grid provides a Cache interface that exposes simple methods for adding, retrieving and removing entries, including atomic mechanisms exposed by the JDK's ConcurrentMap interface. Based on the cache mode used, invoking these methods will trigger a number of things to happen, potentially even including replicating an entry to a remote node or looking up an entry from a remote node, or potentially a cache store. For simple usage, using the Cache API should be no different from using the JDK Map API, and hence migrating from simple in-memory caches based on a Map to Data Grid's Cache should be trivial. Performance Concerns of Certain Map Methods Certain methods exposed in Map have certain performance consequences when used with Data Grid, such as size() , values() , keySet() and entrySet() . Specific methods on the keySet , values and entrySet are fine for use please see their Javadoc for further details. Attempting to perform these operations globally would have large performance impact as well as become a scalability bottleneck. As such, these methods should only be used for informational or debugging purposes only. It should be noted that using certain flags with the withFlags() method can mitigate some of these concerns, please check each method's documentation for more details. Mortal and Immortal Data Further to simply storing entries, Data Grid's cache API allows you to attach mortality information to data. For example, simply using put(key, value) would create an immortal entry, i.e., an entry that lives in the cache forever, until it is removed (or evicted from memory to prevent running out of memory). If, however, you put data in the cache using put(key, value, lifespan, timeunit) , this creates a mortal entry, i.e., an entry that has a fixed lifespan and expires after that lifespan. In addition to lifespan , Data Grid also supports maxIdle as an additional metric with which to determine expiration. Any combination of lifespans or maxIdles can be used. putForExternalRead operation Data Grid's Cache class contains a different 'put' operation called putForExternalRead . This operation is particularly useful when Data Grid is used as a temporary cache for data that is persisted elsewhere. Under heavy read scenarios, contention in the cache should not delay the real transactions at hand, since caching should just be an optimization and not something that gets in the way. To achieve this, putForExternalRead() acts as a put call that only operates if the key is not present in the cache, and fails fast and silently if another thread is trying to store the same key at the same time. In this particular scenario, caching data is a way to optimise the system and it's not desirable that a failure in caching affects the on-going transaction, hence why failure is handled differently. 
putForExternalRead() is considered to be a fast operation because regardless of whether it's successful or not, it doesn't wait for any locks, and so returns to the caller promptly. To understand how to use this operation, let's look at basic example. Imagine a cache of Person instances, each keyed by a PersonId , whose data originates in a separate data store. The following code shows the most common pattern of using putForExternalRead within the context of this example: // Id of the person to look up, provided by the application PersonId id = ...; // Get a reference to the cache where person instances will be stored Cache<PersonId, Person> cache = ...; // First, check whether the cache contains the person instance // associated with with the given id Person cachedPerson = cache.get(id); if (cachedPerson == null) { // The person is not cached yet, so query the data store with the id Person person = dataStore.lookup(id); // Cache the person along with the id so that future requests can // retrieve it from memory rather than going to the data store cache.putForExternalRead(id, person); } else { // The person was found in the cache, so return it to the application return cachedPerson; } Note that putForExternalRead should never be used as a mechanism to update the cache with a new Person instance originating from application execution (i.e. from a transaction that modifies a Person's address). When updating cached values, please use the standard put operation, otherwise the possibility of caching corrupt data is likely. 2.3.1. AdvancedCache API In addition to the simple Cache interface, Data Grid offers an AdvancedCache interface, geared towards extension authors. The AdvancedCache offers the ability to access certain internal components and to apply flags to alter the default behavior of certain cache methods. The following code snippet depicts how an AdvancedCache can be obtained: AdvancedCache advancedCache = cache.getAdvancedCache(); 2.3.1.1. Flags Flags are applied to regular cache methods to alter the behavior of certain methods. For a list of all available flags, and their effects, see the Flag enumeration. Flags are applied using AdvancedCache.withFlags() . This builder method can be used to apply any number of flags to a cache invocation, for example: advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING) .withFlags(Flag.FORCE_SYNCHRONOUS) .put("hello", "world"); 2.3.2. Asynchronous API In addition to synchronous API methods like Cache.put() , Cache.remove() , etc., Data Grid also has an asynchronous, non-blocking API where you can achieve the same results in a non-blocking fashion. These methods are named in a similar fashion to their blocking counterparts, with "Async" appended. E.g., Cache.putAsync() , Cache.removeAsync() , etc. These asynchronous counterparts return a CompletableFuture that contains the actual result of the operation. For example, in a cache parameterized as Cache<String, String> , Cache.put(String key, String value) returns String while Cache.putAsync(String key, String value) returns CompletableFuture<String> . 2.3.2.1. Why use such an API? Non-blocking APIs are powerful in that they provide all of the guarantees of synchronous communications - with the ability to handle communication failures and exceptions - with the ease of not having to block until a call completes. This allows you to better harness parallelism in your system. 
For example: Set<CompletableFuture<?>> futures = new HashSet<>(); futures.add(cache.putAsync(key1, value1)); // does not block futures.add(cache.putAsync(key2, value2)); // does not block futures.add(cache.putAsync(key3, value3)); // does not block // the remote calls for the 3 puts will effectively be executed // in parallel, particularly useful if running in distributed mode // and the 3 keys would typically be pushed to 3 different nodes // in the cluster // check that the puts completed successfully for (CompletableFuture<?> f: futures) f.get(); 2.3.2.2. Which processes actually happen asynchronously? There are 4 things in Data Grid that can be considered to be on the critical path of a typical write operation. These are, in order of cost: network calls marshalling writing to a cache store (optional) locking Using the async methods will take the network calls and marshalling off the critical path. For various technical reasons, writing to a cache store and acquiring locks, however, still happens in the caller's thread. | [
"<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> </dependency> </dependencies>",
"// Set up a clustered Cache Manager. GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder(); // Initialize the default Cache Manager. DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); // Create a distributed cache with synchronous replication. ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clustering().cacheMode(CacheMode.DIST_SYNC); // Obtain a volatile cache. Cache<String, String> cache = cacheManager.administration().withFlags(CacheContainerAdmin.AdminFlag.VOLATILE).getOrCreateCache(\"myCache\", builder.build()); // Stop the Cache Manager. cacheManager.stop();",
"Cache<String, String> myCache = manager.getCache(\"myCache\");",
"Cache<String, String> myCache = manager.administration().createCache(\"myCache\", \"myTemplate\");",
"Cache<String, String> myCache = manager.administration().withFlags(AdminFlag.PERMANENT).createCache(\"myCache\", \"myTemplate\");",
"// Id of the person to look up, provided by the application PersonId id = ...; // Get a reference to the cache where person instances will be stored Cache<PersonId, Person> cache = ...; // First, check whether the cache contains the person instance // associated with with the given id Person cachedPerson = cache.get(id); if (cachedPerson == null) { // The person is not cached yet, so query the data store with the id Person person = dataStore.lookup(id); // Cache the person along with the id so that future requests can // retrieve it from memory rather than going to the data store cache.putForExternalRead(id, person); } else { // The person was found in the cache, so return it to the application return cachedPerson; }",
"AdvancedCache advancedCache = cache.getAdvancedCache();",
"advancedCache.withFlags(Flag.CACHE_MODE_LOCAL, Flag.SKIP_LOCKING) .withFlags(Flag.FORCE_SYNCHRONOUS) .put(\"hello\", \"world\");",
"Set<CompletableFuture<?>> futures = new HashSet<>(); futures.add(cache.putAsync(key1, value1)); // does not block futures.add(cache.putAsync(key2, value2)); // does not block futures.add(cache.putAsync(key3, value3)); // does not block // the remote calls for the 3 puts will effectively be executed // in parallel, particularly useful if running in distributed mode // and the 3 keys would typically be pushed to 3 different nodes // in the cluster // check that the puts completed successfully for (CompletableFuture<?> f: futures) f.get();"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/creating-embedded-caches |
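The mortal-entry and asynchronous concepts described above can be combined in a short sketch. The snippet below assumes the Cache<String, String> instance named cache from the earlier example and uses placeholder keys and values; it stores one entry with a lifespan and a maximum idle time, then writes another entry without blocking the caller.

// Requires java.util.concurrent.TimeUnit and java.util.concurrent.CompletableFuture.
// Mortal entry: expires 1 hour after creation, or after 10 minutes without access.
cache.put("session-42", "some-state", 1, TimeUnit.HOURS, 10, TimeUnit.MINUTES);

// Non-blocking write: the network call and marshalling happen off the caller's critical path.
CompletableFuture<String> future = cache.putAsync("greeting", "hello");
future.thenAccept(previous -> System.out.println("Previous value: " + previous));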
Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform | Chapter 12. Deploying IPv6 on Red Hat Quay on OpenShift Container Platform Note Currently, deploying IPv6 on the Red Hat Quay on OpenShift Container Platform is not supported on IBM Power and IBM Z. Your Red Hat Quay on OpenShift Container Platform deployment can now be served in locations that only support IPv6, such as Telco and Edge environments. For a list of known limitations, see IPv6 limitations 12.1. Enabling the IPv6 protocol family Use the following procedure to enable IPv6 support on your Red Hat Quay deployment. Prerequisites You have updated Red Hat Quay to at least version 3.8. Your host and container software platform (Docker, Podman) must be configured to support IPv6. Procedure In your deployment's config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6 , for example: # ... FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false # ... Start, or restart, your Red Hat Quay deployment. Check that your deployment is listening to IPv6 by entering the following command: USD curl <quay_endpoint>/health/instance {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200} After enabling IPv6 in your deployment's config.yaml , all Red Hat Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the IPv6 and dual-stack limitations . Warning If your environment is configured to IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6 , Red Hat Quay will fail to deploy. 12.2. IPv6 limitations Currently, attempting to configure your Red Hat Quay deployment with the common Microsoft Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Microsoft Azure Blob Storage does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4433 . Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue. For more information, see PROJQUAY-4470 . | [
"FEATURE_GOOGLE_LOGIN: false FEATURE_INVITE_ONLY_USER_CREATION: false FEATURE_LISTEN_IP_VERSION: IPv6 FEATURE_MAILING: false FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false",
"curl <quay_endpoint>/health/instance {\"data\":{\"services\":{\"auth\":true,\"database\":true,\"disk_space\":true,\"registry_gunicorn\":true,\"service_key\":true,\"web_gunicorn\":true}},\"status_code\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/operator-ipv6-dual-stack |
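When checking connectivity, it can also be useful to force curl onto the IPv6 protocol family so that a dual-stack resolver does not silently fall back to IPv4. The following is a hedged example; the hostname, the literal address (taken from the IPv6 documentation range), and the port are placeholders for your Quay endpoint.

# Force curl to resolve and connect over IPv6 only.
curl -6 https://quay.example.com/health/instance

# Or target an IPv6 literal directly; -g disables URL globbing so the brackets pass through.
curl -g -6 "http://[2001:db8::10]:8080/health/instance"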
Chapter 2. Differences between java and alt-java | Chapter 2. Differences between java and alt-java The alt-java and java binaries are similar, with the exception of the SSB (Speculative Store Bypass) mitigation. Although the SSB mitigation patch exists only for the x86-64 architecture (Intel and AMD), the alt-java binary exists on all architectures. For non-x86 architectures, the alt-java binary is identical to the java binary, except that alt-java has no patches. Additional resources For more information about similarities between alt-java and java , see RH1750419 in the Red Hat Bugzilla documentation. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_alt-java_with_red_hat_build_of_openjdk/diff-java-and-altjava
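Because alt-java is intended as a drop-in counterpart to the java launcher, it is invoked with the same arguments. The following is an assumed illustration only; the application JAR name is a placeholder, and the installed location of the binaries depends on how the OpenJDK package was set up on your system.

# Standard launcher (ships without the SSB mitigation).
java -jar myapp.jar

# Mitigated launcher; on x86-64 this applies the SSB mitigation, on other
# architectures it behaves the same as java.
alt-java -jar myapp.jar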
Chapter 8. Authorization Services | Chapter 8. Authorization Services Red Hat Single Sign-On Authorization Services are built on top of well-known standards such as the OAuth2 and User-Managed Access specifications. OAuth2 clients (such as front end applications) can obtain access tokens from the server using the token endpoint and use these same tokens to access resources protected by a resource server (such as back end services). In the same way, Red Hat Single Sign-On Authorization Services provide extensions to OAuth2 to allow access tokens to be issued based on the processing of all policies associated with the resource(s) or scope(s) being requested. This means that resource servers can enforce access to their protected resources based on the permissions granted by the server and held by an access token. In Red Hat Single Sign-On Authorization Services the access token with permissions is called a Requesting Party Token or RPT for short. In addition to the issuance of RPTs, Red Hat Single Sign-On Authorization Services also provides a set of RESTful endpoints that allow resources servers to manage their protected resources, scopes, permissions and policies, helping developers to extend or integrate these capabilities into their applications in order to support fine-grained authorization. 8.1. Discovering Authorization Services Endpoints and Metadata Red Hat Single Sign-On provides a discovery document from which clients can obtain all necessary information to interact with Red Hat Single Sign-On Authorization Services, including endpoint locations and capabilities. The discovery document can be obtained from: curl -X GET \ http://USD{host}:USD{port}/auth/realms/USD{realm}/.well-known/uma2-configuration Where USD{host}:USD{port} is the hostname (or IP address) and port where Red Hat Single Sign-On is running and USD{realm} is the name of a realm in Red Hat Single Sign-On. As a result, you should get a response as follows: { // some claims are expected here // these are the main claims in the discovery document about Authorization Services endpoints location "token_endpoint": "http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token", "token_introspection_endpoint": "http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token/introspect", "resource_registration_endpoint": "http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/resource_set", "permission_endpoint": "http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/permission", "policy_endpoint": "http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy" } Each of these endpoints expose a specific set of capabilities: token_endpoint A OAuth2-compliant Token Endpoint that supports the urn:ietf:params:oauth:grant-type:uma-ticket grant type. Through this endpoint clients can send authorization requests and obtain an RPT with all permissions granted by Red Hat Single Sign-On. token_introspection_endpoint A OAuth2-compliant Token Introspection Endpoint which clients can use to query the server to determine the active state of an RPT and to determine any other information associated with the token, such as the permissions granted by Red Hat Single Sign-On. resource_registration_endpoint A UMA-compliant Resource Registration Endpoint which resource servers can use to manage their protected resources and scopes. This endpoint provides operations create, read, update and delete resources and scopes in Red Hat Single Sign-On. 
permission_endpoint A UMA-compliant Permission Endpoint which resource servers can use to manage permission tickets. This endpoint provides operations create, read, update, and delete permission tickets in Red Hat Single Sign-On. 8.2. Obtaining Permissions To obtain permissions from Red Hat Single Sign-On you send an authorization request to the token endpoint. As a result, Red Hat Single Sign-On will evaluate all policies associated with the resource(s) and scope(s) being requested and issue an RPT with all permissions granted by the server. Clients are allowed to send authorization requests to the token endpoint using the following parameters: grant_type This parameter is required . Must be urn:ietf:params:oauth:grant-type:uma-ticket . ticket This parameter is optional . The most recent permission ticket received by the client as part of the UMA authorization process. claim_token This parameter is optional . A string representing additional claims that should be considered by the server when evaluating permissions for the resource(s) and scope(s) being requested. This parameter allows clients to push claims to Red Hat Single Sign-On. For more details about all supported token formats see claim_token_format parameter. claim_token_format This parameter is optional . A string indicating the format of the token specified in the claim_token parameter. Red Hat Single Sign-On supports two token formats: urn:ietf:params:oauth:token-type:jwt and https://openid.net/specs/openid-connect-core-1_0.html#IDToken . The urn:ietf:params:oauth:token-type:jwt format indicates that the claim_token parameter references an access token. The https://openid.net/specs/openid-connect-core-1_0.html#IDToken indicates that the claim_token parameter references an OpenID Connect ID Token. rpt This parameter is optional . A previously issued RPT which permissions should also be evaluated and added in a new one. This parameter allows clients in possession of an RPT to perform incremental authorization where permissions are added on demand. permission This parameter is optional . A string representing a set of one or more resources and scopes the client is seeking access. This parameter can be defined multiple times in order to request permission for multiple resource and scopes. This parameter is an extension to urn:ietf:params:oauth:grant-type:uma-ticket grant type in order to allow clients to send authorization requests without a permission ticket. The format of the string must be: RESOURCE_ID#SCOPE_ID . For instance: Resource A#Scope A , Resource A#Scope A, Scope B, Scope C , Resource A , #Scope A . audience This parameter is optional . The client identifier of the resource server to which the client is seeking access. This parameter is mandatory in case the permission parameter is defined. It serves as a hint to Red Hat Single Sign-On to indicate the context in which permissions should be evaluated. response_include_resource_name This parameter is optional . A boolean value indicating to the server whether resource names should be included in the RPT's permissions. If false, only the resource identifier is included. response_permissions_limit This parameter is optional . An integer N that defines a limit for the amount of permissions an RPT can have. When used together with rpt parameter, only the last N requested permissions will be kept in the RPT. submit_request This parameter is optional . 
A boolean value indicating whether the server should create permission requests to the resources and scopes referenced by a permission ticket. This parameter only have effect if used together with the ticket parameter as part of a UMA authorization process. response_mode This parameter is optional . A string value indicating how the server should respond to authorization requests. This parameter is specially useful when you are mainly interested in either the overall decision or the permissions granted by the server, instead of a standard OAuth2 response. Possible values are: decision Indicates that responses from the server should only represent the overall decision by returning a JSON with the following format: { 'result': true } If the authorization request does not map to any permission, a 403 HTTP status code is returned instead. permissions Indicates that responses from the server should contain any permission granted by the server by returning a JSON with the following format: [ { 'rsid': 'My Resource' 'scopes': ['view', 'update'] }, ... ] If the authorization request does not map to any permission, a 403 HTTP status code is returned instead. Example of a authorization request when a client is seeking access to two resources protected by a resource server. curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "audience={resource_server_client_id}" \ --data "permission=Resource A#Scope A" \ --data "permission=Resource B#Scope B" Example of a authorization request when a client is seeking access to any resource and scope protected by a resource server. curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "audience={resource_server_client_id}" Example of an authorization request when a client is seeking access to a UMA protected resource after receiving a permission ticket from the resource server as part of the authorization process: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket} If Red Hat Single Sign-On assessment process results in issuance of permissions, it issues the RPT with which it has associated the permissions: Red Hat Single Sign-On responds to the client with the RPT HTTP/1.1 200 OK Content-Type: application/json ... { "access_token": "USD{rpt}", } The response from the server is just like any other response from the token endpoint when using some other grant type. The RPT can be obtained from the access_token response parameter. If the client is not authorized, Red Hat Single Sign-On responds with a 403 HTTP status code: Red Hat Single Sign-On denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } 8.2.1. Client Authentication Methods Clients need to authenticate to the token endpoint in order to obtain an RPT. 
When using the urn:ietf:params:oauth:grant-type:uma-ticket grant type, clients can use any of these authentication methods: Bearer Token Clients should send an access token as a Bearer credential in an HTTP Authorization header to the token endpoint. Example: an authorization request using an access token to authenticate to the token endpoint curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" This method is especially useful when the client is acting on behalf of a user. In this case, the bearer token is an access token previously issued by Red Hat Single Sign-On to some client acting on behalf of a user (or on behalf of itself). Permissions will be evaluated considering the access context represented by the access token. For instance, if the access token was issued to Client A acting on behalf of User A, permissions will be granted depending on the resources and scopes to which User A has access. Client Credentials Client can use any of the client authentication methods supported by Red Hat Single Sign-On. For instance, client_id/client_secret or JWT. Example: an authorization request using client id and client secret to authenticate to the token endpoint curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Basic cGhvdGg6L7Jl13RmfWgtkk==pOnNlY3JldA==" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" 8.2.2. Pushing Claims When obtaining permissions from the server you can push arbitrary claims in order to have these claims available to your policies when evaluating permissions. If you are obtaining permissions from the server without using a permission ticket (UMA flow), you can send an authorization request to the token endpoint as follows: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "claim_token=ewogICAib3JnYW5pemF0aW9uIjogWyJhY21lIl0KfQ==" \ --data "claim_token_format=urn:ietf:params:oauth:token-type:jwt" \ --data "client_id={resource_server_client_id}" \ --data "client_secret={resource_server_client_secret}" \ --data "audience={resource_server_client_id}" The claim_token parameter expects a BASE64 encoded JSON with a format similar to the example below: { "organization" : ["acme"] } The format expects one or more claims where the value for each claim must be an array of strings. 8.2.2.1. Pushing Claims Using UMA For more details about how to push claims when using UMA and permission tickets, please take a look at Permission API 8.3. User-Managed Access Red Hat Single Sign-On Authorization Services is based on User-Managed Access or UMA for short. UMA is a specification that enhances OAuth2 capabilities in the following ways: Privacy Nowadays, user privacy is becoming a huge concern, as more and more data and devices are available and connected to the cloud. With UMA and Red Hat Single Sign-On, resource servers can enhance their capabilities in order to improve how their resources are protected in respect to user privacy where permissions are granted based on policies defined by the user. Party-to-Party Authorization Resource owners (e.g.: regular end-users) can manage access to their resources and authorize other parties (e.g: regular end-users) to access these resources. 
This is different from OAuth2, where consent is given to a client application acting on behalf of a user; with UMA, resource owners can grant access to other users, in a completely asynchronous manner. Resource Sharing Resource owners are allowed to manage permissions to their resources and decide who can access a particular resource and how. Red Hat Single Sign-On can then act as a sharing management service from which resource owners can manage their resources. Red Hat Single Sign-On is a UMA 2.0 compliant authorization server that provides most UMA capabilities. As an example, consider a user Alice (resource owner) using an Internet Banking Service (resource server) to manage her Bank Account (resource). One day, Alice decides to open her bank account to Bob (requesting party), an accounting professional. However, Bob should only have access to view (scope) Alice's account. As a resource server, the Internet Banking Service must be able to protect Alice's Bank Account. For that, it relies on the Red Hat Single Sign-On Resource Registration Endpoint to create a resource in the server representing Alice's Bank Account. At this moment, if Bob tries to access Alice's Bank Account, access will be denied. The Internet Banking Service defines a few default policies for banking accounts. One of them is that only the owner, in this case Alice, is allowed to access her bank account. However, the Internet Banking Service, with respect to Alice's privacy, also allows her to change specific policies for the banking account. One of these policies that she can change is to define which people are allowed to view her bank account. For that, the Internet Banking Service relies on Red Hat Single Sign-On to provide Alice with a space where she can select individuals and the operations (or data) they are allowed to access. At any time, Alice can revoke access or grant additional permissions to Bob. 8.3.1. Authorization Process In UMA, the authorization process starts when a client tries to access a UMA protected resource server. A UMA protected resource server expects a bearer token in the request where the token is an RPT. When a client requests a resource at the resource server without a permission ticket: Client requests a protected resource without sending an RPT curl -X GET \ http://USD{host}:USD{port}/my-resource-server/resource/1bfdfe78-a4e1-4c2d-b142-fc92b75b986f The resource server sends a response back to the client with a permission ticket and an as_uri parameter with the location of a Red Hat Single Sign-On server where the ticket should be sent in order to obtain an RPT. Resource server responds with a permission ticket HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="USD{realm}", as_uri="https://USD{host}:USD{port}/auth/realms/USD{realm}", ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de" A permission ticket is a special type of token issued by the Red Hat Single Sign-On Permission API. It represents the permissions being requested (e.g.: resources and scopes) as well as any other information associated with the request. Only resource servers are allowed to create these tokens. Now that the client has a permission ticket and also the location of a Red Hat Single Sign-On server, the client can use the discovery document to obtain the location of the token endpoint and send an authorization request.
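For example, the client can retrieve the discovery document from the well-known UMA configuration endpoint: curl -X GET \ http://USD{host}:USD{port}/auth/realms/USD{realm}/.well-known/uma2-configuration Among other claims, the response contains the token_endpoint location to which the authorization request below is sent.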
Client sends an authorization request to the token endpoint to obtain an RPT curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket}" If the Red Hat Single Sign-On assessment process results in the issuance of permissions, it issues the RPT with which it has associated the permissions: Red Hat Single Sign-On responds to the client with the RPT HTTP/1.1 200 OK Content-Type: application/json ... { "access_token": "USD{rpt}", } The response from the server is just like any other response from the token endpoint when using some other grant type. The RPT can be obtained from the access_token response parameter. If the client is not authorized to have the requested permissions, Red Hat Single Sign-On responds with a 403 HTTP status code: Red Hat Single Sign-On denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } 8.3.2. Submitting Permission Requests As part of the authorization process, clients first need to obtain a permission ticket from a UMA protected resource server in order to exchange it for an RPT at the Red Hat Single Sign-On Token Endpoint. By default, Red Hat Single Sign-On responds with a 403 HTTP status code and a request_denied error if the client cannot be issued an RPT. Red Hat Single Sign-On denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } Such a response implies that Red Hat Single Sign-On could not issue an RPT with the permissions represented by a permission ticket. In some situations, client applications may want to start an asynchronous authorization flow and let the owner of the resources being requested decide whether or not access should be granted. For that, clients can use the submit_request request parameter along with an authorization request to the token endpoint: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket}" \ --data "submit_request=true" When using the submit_request parameter, Red Hat Single Sign-On will persist a permission request for each resource to which access was denied. Once created, resource owners can check their account and manage their permission requests. You can think about this functionality as a Request Access button in your application, where users can ask other users for access to their resources. 8.3.3. Managing Access to Users Resources Users can manage access to their resources using the Red Hat Single Sign-On User Account Service. To enable this functionality, you must first enable User-Managed Access for your realm. To do so, open the realm settings page in the Red Hat Single Sign-On Administration Console and enable the User-Managed Access switch. On the left side menu, the My Resources option leads to a page where users are able to: Manage Permission Requests that Need my approval This section contains a list of all permission requests awaiting approval. These requests are connected to the parties (users) requesting access to a particular resource. Users are allowed to approve or deny these requests.
Manage My resources This section contains a list of all resources owned by the user. Users can click on a resource for more details and share the resource with others. Manage Resources shared with me This section contains a list of all resources shared with the user. Manage Your requests waiting approval This section contains a list of permission requests sent by the user that are waiting for the approval of another user or resource owner. When the user chooses to view the details of one of their resources by clicking on any resource in the "My resources" listing, they are redirected to a page as follows: From this page, users are able to: Manage People with access to this resource This section contains a list of people with access to this resource. Users are allowed to revoke access by clicking on the Revoke button or by removing a specific Permission . Share the resource with others By typing the username or e-mail of another user, the user is able to share the resource and select the permissions they want to grant. 8.4. Protection API The Protection API provides a UMA-compliant set of endpoints covering: Resource Management With this endpoint, resource servers can manage their resources remotely and enable policy enforcers to query the server for the resources that need protection. Permission Management In the UMA protocol, resource servers access this endpoint to create permission tickets. Red Hat Single Sign-On also provides endpoints to manage the state of permissions and query permissions. Policy API Red Hat Single Sign-On leverages the UMA Protection API to allow resource servers to manage permissions for their users. In addition to the Resource and Permission APIs, Red Hat Single Sign-On provides a Policy API from where permissions can be set to resources by resource servers on behalf of their users. An important requirement for this API is that only resource servers are allowed to access its endpoints using a special OAuth2 access token called a protection API token (PAT). In UMA, a PAT is a token with the scope uma_protection . 8.4.1. What is a PAT and How to Obtain It A protection API token (PAT) is a special OAuth2 access token with a scope defined as uma_protection . When you create a resource server, Red Hat Single Sign-On automatically creates a role, uma_protection , for the corresponding client application and associates it with the client's service account. Service Account granted with uma_protection role Resource servers can obtain a PAT from Red Hat Single Sign-On like any other OAuth2 access token. For example, using curl: curl -X POST \ -H "Content-Type: application/x-www-form-urlencoded" \ -d 'grant_type=client_credentials&client_id=USD{client_id}&client_secret=USD{client_secret}' \ "http://localhost:8080/auth/realms/USD{realm_name}/protocol/openid-connect/token" The example above uses the client_credentials grant type to obtain a PAT from the server. As a result, the server returns a response similar to the following: { "access_token": USD{PAT}, "expires_in": 300, "refresh_expires_in": 1800, "refresh_token": USD{refresh_token}, "token_type": "bearer", "id_token": USD{id_token}, "not-before-policy": 0, "session_state": "ccea4a55-9aec-4024-b11c-44f6f168439e" } Note Red Hat Single Sign-On can authenticate your client application in different ways. For simplicity, the client_credentials grant type is used here, which requires a client_id and a client_secret . You can choose to use any supported authentication method. 8.4.2.
Managing Resources Resource servers can manage their resources remotely using a UMA-compliant endpoint. This endpoint provides operations outlined as follows (entire path omitted for clarity): Create resource set description: POST /resource_set Read resource set description: GET /resource_set/{_id} Update resource set description: PUT /resource_set/{_id} Delete resource set description: DELETE /resource_set/{_id} List resource set descriptions: GET /resource_set For more information about the contract for each of these operations, see UMA Resource Registration API . 8.4.2.1. Creating a Resource To create a resource you must send an HTTP POST request as follows: curl -v -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Tweedl Social Service", "type":"http://www.example.com/rsrcs/socialstream/140-compatible", "icon_uri":"http://www.example.com/icons/sharesocial.png", "resource_scopes":[ "read-public", "post-updates", "read-private", "http://www.example.com/scopes/all" ] }' By default, the owner of a resource is the resource server. If you want to define a different owner, such as an specific user, you can send a request as follows: curl -v -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Alice Resource", "owner": "alice" }' Where the property owner can be set with the username or the identifier of the user. 8.4.2.2. Creating User-Managed Resources By default, resources created via Protection API can not be managed by resource owners through the User Account Service . To create resources and allow resource owners to manage these resources, you must set ownerManagedAccess property as follows: curl -v -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Alice Resource", "owner": "alice", "ownerManagedAccess": true }' 8.4.2.3. Updating Resources To update an existing resource, send an HTTP PUT request as follows: curl -v -X PUT \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "_id": "Alice Resource", "name":"Alice Resource", "resource_scopes": [ "read" ] }' 8.4.2.4. Deleting Resources To delete an existing resource, send an HTTP DELETE request as follows: curl -v -X DELETE \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} \ -H 'Authorization: Bearer 'USDpat 8.4.2.5. 
Querying Resources To query the resources by id , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} To query resources given a name , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource To query resources given a uri , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?uri=/api/alice To query resources given an owner , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?owner=alice To query resources given a type , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?type=albums To query resources given a scope , send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?scope=read When querying the server for resources, use the first and max parameters to limit the results. 8.4.3. Managing Permission Requests Resource servers using the UMA protocol can use a specific endpoint to manage permission requests. This endpoint provides a UMA-compliant flow for registering permission requests and obtaining a permission ticket. A permission ticket is a special security token type representing a permission request. Per the UMA specification, a permission ticket is: A correlation handle that is conveyed from an authorization server to a resource server, from a resource server to a client, and ultimately from a client back to an authorization server, to enable the authorization server to assess the correct policies to apply to a request for authorization data. In most cases, you won't need to deal with this endpoint directly. Red Hat Single Sign-On provides a policy enforcer that enables UMA for your resource server so it can obtain a permission ticket from the authorization server, return this ticket to the client application, and enforce authorization decisions based on a final requesting party token (RPT). The process of obtaining permission tickets from Red Hat Single Sign-On is performed by resource servers and not regular client applications, where permission tickets are obtained when a client tries to access a protected resource without the necessary grants to access the resource. The issuance of permission tickets is an important aspect of using UMA as it allows resource servers to: Abstract from clients the data associated with the resources protected by the resource server Register authorization requests in Red Hat Single Sign-On, which in turn can be used later in workflows to grant access based on the resource owner's consent Decouple resource servers from authorization servers and allow them to protect and manage their resources using different authorization servers From the client's perspective, permission tickets also have important aspects that are worth highlighting: Clients don't need to know about how authorization data is associated with protected resources. A permission ticket is completely opaque to clients. Clients can have access to resources on different resource servers protected by different authorization servers These are just some of the benefits brought by UMA; other aspects of UMA are strongly based on permission tickets, especially regarding privacy and user-controlled access to resources. 8.4.3.1.
Creating Permission Ticket To create a permission ticket, send an HTTP POST request as follows: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '[ { "resource_id": "{resource_id}", "resource_scopes": [ "view" ] } ]' When creating tickets you can also push arbitrary claims and associate these claims with the ticket: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '[ { "resource_id": "{resource_id}", "resource_scopes": [ "view" ], "claims": { "organization": ["acme"] } } ]' Where these claims will be available to your policies when evaluating permissions for the resource and scope(s) associated with the permission ticket. 8.4.3.2. Other non UMA-compliant endpoints 8.4.3.2.1. Creating permission ticket To grant permissions for a specific resource with id {resource_id} to a user with id {user_id}, as an owner of the resource send an HTTP POST request as follows: curl -X POST \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "resource": "{resource_id}", "requester": "{user_id}", "granted": true, "scopeName": "view" }' 8.4.3.2.2. Getting permission tickets curl http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token You can use any of these query parameters: scopeId resourceId owner requester granted returnNames first max 8.4.3.2.3. Updating permission ticket curl -X PUT \ http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "id": "{ticket_id}" "resource": "{resource_id}", "requester": "{user_id}", "granted": false, "scopeName": "view" }' 8.4.3.2.4. Deleting permission ticket curl -X DELETE http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket/{ticket_id} \ -H 'Authorization: Bearer 'USDaccess_token 8.4.4. Managing Resource Permissions using the Policy API Red Hat Single Sign-On leverages the UMA Protection API to allow resource servers to manage permissions for their users. In addition to the Resource and Permission APIs, Red Hat Single Sign-On provides a Policy API from where permissions can be set to resources by resource servers on behalf of their users. The Policy API is available at: This API is protected by a bearer token that must represent a consent granted by the user to the resource server to manage permissions on his behalf. The bearer token can be a regular access token obtained from the token endpoint using: Resource Owner Password Credentials Grant Type Token Exchange, in order to exchange an access token granted to some client (public client) for a token where audience is the resource server 8.4.4.1. 
Associating a Permission with a Resource To associate a permission with a specific resource you must send a HTTP POST request as follows: curl -X POST \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "roles": ["people-manager"] }' In the example above we are creating and associating a new permission to a resource represented by resource_id where any user with a role people-manager should be granted with the read scope. You can also create policies using other access control mechanisms, such as using groups: curl -X POST \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "groups": ["/Managers/People Managers"] }' Or a specific client: curl -X POST \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "clients": ["my-client"] }' Or even using a custom policy using JavaScript: Note Upload Scripts is Deprecated and will be removed in future releases. This feature is disabled by default. To enable start the server with -Dkeycloak.profile.feature.upload_scripts=enabled . For more details see Profiles . curl -X POST \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "condition": "if (isPeopleManager()) {USDevaluation.grant()}" }' It is also possible to set any combination of these access control mechanisms. To update an existing permission, send an HTTP PUT request as follows: curl -X PUT \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{permission_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "id": "21eb3fed-02d7-4b5a-9102-29f3f09b6de2", "name": "Any people manager", "description": "Allow access to any people manager", "type": "uma", "scopes": [ "album:view" ], "logic": "POSITIVE", "decisionStrategy": "UNANIMOUS", "owner": "7e22131a-aa57-4f5f-b1db-6e82babcd322", "roles": [ "user" ] }' 8.4.4.2. Removing a Permission To remove a permission associated with a resource, send an HTTP DELETE request as follows: curl -X DELETE \ http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{permission_id} \ -H 'Authorization: Bearer 'USDaccess_token 8.4.4.3. 
Querying Permission To query the permissions associated with a resource, send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?resource={resource_id} To query the permissions given its name, send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?name=Any people manager To query the permissions associated with a specific scope, send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?scope=read To query all permissions, send an HTTP GET request as follows: http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy When querying the server for permissions use parameters first and max results to limit the result. 8.5. Requesting Party Token A requesting party token (RPT) is a JSON web token (JWT) digitally signed using JSON web signature (JWS) . The token is built based on the OAuth2 access token previously issued by Red Hat Single Sign-On to a specific client acting on behalf of a user or on its own behalf. When you decode an RPT, you see a payload similar to the following: { "authorization": { "permissions": [ { "resource_set_id": "d2fe9843-6462-4bfc-baba-b5787bb6e0e7", "resource_set_name": "Hello World Resource" } ] }, "jti": "d6109a09-78fd-4998-bf89-95730dfd0892-1464906679405", "exp": 1464906971, "nbf": 0, "iat": 1464906671, "sub": "f1888f4d-5172-4359-be0c-af338505d86c", "typ": "kc_ett", "azp": "hello-world-authz-service" } From this token you can obtain all permissions granted by the server from the permissions claim. Also note that permissions are directly related with the resources/scopes you are protecting and completely decoupled from the access control methods that were used to actually grant and issue these same permissions. 8.5.1. Introspecting a Requesting Party Token Sometimes you might want to introspect a requesting party token (RPT) to check its validity or obtain the permissions within the token to enforce authorization decisions on the resource server side. There are two main use cases where token introspection can help you: When client applications need to query the token validity to obtain a new one with the same or additional permissions When enforcing authorization decisions at the resource server side, especially when none of the built-in policy enforcers fits your application 8.5.2. Obtaining Information about an RPT The token introspection is essentially a OAuth2 token introspection -compliant endpoint from which you can obtain information about an RPT. To introspect an RPT using this endpoint, you can send a request to the server as follows: curl -X POST \ -H "Authorization: Basic aGVsbG8td29ybGQtYXV0aHotc2VydmljZTpzZWNyZXQ=" \ -H "Content-Type: application/x-www-form-urlencoded" \ -d 'token_type_hint=requesting_party_token&token=USD{RPT}' \ "http://localhost:8080/auth/realms/hello-world-authz/protocol/openid-connect/token/introspect" Note The request above is using HTTP BASIC and passing the client's credentials (client ID and secret) to authenticate the client attempting to introspect the token, but you can use any other client authentication method supported by Red Hat Single Sign-On. The introspection endpoint expects two parameters: token_type_hint Use requesting_party_token as the value for this parameter, which indicates that you want to introspect an RPT. 
token Use the token string as it was returned by the server during the authorization process as the value for this parameter. As a result, the server response is: { "permissions": [ { "resource_id": "90ccc6fc-b296-4cd1-881e-089e1ee15957", "resource_name": "Hello World Resource" } ], "exp": 1465314139, "nbf": 0, "iat": 1465313839, "aud": "hello-world-authz-service", "active": true } If the RPT is not active, this response is returned instead: { "active": false } 8.5.3. Do I Need to Invoke the Server Every Time I Want to Introspect an RPT? No. Just like a regular access token issued by a Red Hat Single Sign-On server, RPTs also use the JSON web token (JWT) specification as the default format. If you want to validate these tokens without a call to the remote introspection endpoint, you can decode the RPT and query for its validity locally. Once you decode the token, you can also use the permissions within the token to enforce authorization decisions. This is essentially what the policy enforcers do. Be sure to: Validate the signature of the RPT (based on the realm's public key) Query for token validity based on its exp , iat , and aud claims 8.6. Authorization Client Java API Depending on your requirements, a resource server should be able to manage resources remotely or even check for permissions programmatically. If you are using Java, you can access the Red Hat Single Sign-On Authorization Services using the Authorization Client API. It is targeted for resource servers that want to access the different endpoints provided by the server such as the Token Endpoint, Resource, and Permission management endpoints. 8.6.1. Maven Dependency <dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>USD{KEYCLOAK_VERSION}</version> </dependency> </dependencies> 8.6.2. Configuration The client configuration is defined in a keycloak.json file as follows: { "realm": "hello-world-authz", "auth-server-url" : "http://localhost:8080/auth", "resource" : "hello-world-authz-service", "credentials": { "secret": "secret" } } realm (required) The name of the realm. auth-server-url (required) The base URL of the Red Hat Single Sign-On server. All other Red Hat Single Sign-On pages and REST service endpoints are derived from this. It is usually in the form https://host:port/auth . resource (required) The client-id of the application. Each application has a client-id that is used to identify the application. credentials (required) Specifies the credentials of the application. This is an object notation where the key is the credential type and the value is the value of the credential type. The configuration file is usually located in your application's classpath, the default location from where the client is going to try to find a keycloak.json file. 8.6.3. Creating the Authorization Client Considering you have a keycloak.json file in your classpath, you can create a new AuthzClient instance as follows: // create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create(); 8.6.4. 
Obtaining User Entitlements Here is an example illustrating how to obtain user entitlements: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server Here is an example illustrating how to obtain user entitlements for a set of one or more resources: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission("Default Resource"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server 8.6.5. Creating a Resource Using the Protection API // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName("New Resource"); newResource.setType("urn:hello-world-authz:resources:example"); newResource.addScope(new ScopeRepresentation("urn:hello-world-authz:scopes:view")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource); 8.6.6. Introspecting an RPT // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println("Token status is: " + requestingPartyToken.getActive()); System.out.println("Permissions granted by the server: "); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); } | [
"curl -X GET http://USD{host}:USD{port}/auth/realms/USD{realm}/.well-known/uma2-configuration",
"{ // some claims are expected here // these are the main claims in the discovery document about Authorization Services endpoints location \"token_endpoint\": \"http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token\", \"token_introspection_endpoint\": \"http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token/introspect\", \"resource_registration_endpoint\": \"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/resource_set\", \"permission_endpoint\": \"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/permission\", \"policy_endpoint\": \"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy\" }",
"{ 'result': true }",
"[ { 'rsid': 'My Resource' 'scopes': ['view', 'update'] }, ]",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"audience={resource_server_client_id}\" --data \"permission=Resource A#Scope A\" --data \"permission=Resource B#Scope B\"",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"audience={resource_server_client_id}\"",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket}",
"HTTP/1.1 200 OK Content-Type: application/json { \"access_token\": \"USD{rpt}\", }",
"HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": \"request_denied\" }",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\"",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Basic cGhvdGg6L7Jl13RmfWgtkk==pOnNlY3JldA==\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\"",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"claim_token=ewogICAib3JnYW5pemF0aW9uIjogWyJhY21lIl0KfQ==\" --data \"claim_token_format=urn:ietf:params:oauth:token-type:jwt\" --data \"client_id={resource_server_client_id}\" --data \"client_secret={resource_server_client_secret}\" --data \"audience={resource_server_client_id}\"",
"{ \"organization\" : [\"acme\"] }",
"curl -X GET http://USD{host}:USD{port}/my-resource-server/resource/1bfdfe78-a4e1-4c2d-b142-fc92b75b986f",
"HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm=\"USD{realm}\", as_uri=\"https://USD{host}:USD{port}/auth/realms/USD{realm}\", ticket=\"016f84e8-f9b9-11e0-bd6f-0021cc6004de\"",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket}",
"HTTP/1.1 200 OK Content-Type: application/json { \"access_token\": \"USD{rpt}\", }",
"HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": \"request_denied\" }",
"HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": \"request_denied\" }",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket} --data \"submit_request=true\"",
"curl -X POST -H \"Content-Type: application/x-www-form-urlencoded\" -d 'grant_type=client_credentials&client_id=USD{client_id}&client_secret=USD{client_secret}' \"http://localhost:8080/auth/realms/USD{realm_name}/protocol/openid-connect/token\"",
"{ \"access_token\": USD{PAT}, \"expires_in\": 300, \"refresh_expires_in\": 1800, \"refresh_token\": USD{refresh_token}, \"token_type\": \"bearer\", \"id_token\": USD{id_token}, \"not-before-policy\": 0, \"session_state\": \"ccea4a55-9aec-4024-b11c-44f6f168439e\" }",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set",
"curl -v -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Tweedl Social Service\", \"type\":\"http://www.example.com/rsrcs/socialstream/140-compatible\", \"icon_uri\":\"http://www.example.com/icons/sharesocial.png\", \"resource_scopes\":[ \"read-public\", \"post-updates\", \"read-private\", \"http://www.example.com/scopes/all\" ] }'",
"curl -v -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Alice Resource\", \"owner\": \"alice\" }'",
"curl -v -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Alice Resource\", \"owner\": \"alice\", \"ownerManagedAccess\": true }'",
"curl -v -X PUT http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"_id\": \"Alice Resource\", \"name\":\"Alice Resource\", \"resource_scopes\": [ \"read\" ] }'",
"curl -v -X DELETE http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} -H 'Authorization: Bearer 'USDpat",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set/{resource_id}",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?uri=/api/alice",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?owner=alice",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?type=albums",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/resource_set?scope=read",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '[ { \"resource_id\": \"{resource_id}\", \"resource_scopes\": [ \"view\" ] } ]'",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '[ { \"resource_id\": \"{resource_id}\", \"resource_scopes\": [ \"view\" ], \"claims\": { \"organization\": [\"acme\"] } } ]'",
"curl -X POST http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"resource\": \"{resource_id}\", \"requester\": \"{user_id}\", \"granted\": true, \"scopeName\": \"view\" }'",
"curl http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token",
"curl -X PUT http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"id\": \"{ticket_id}\" \"resource\": \"{resource_id}\", \"requester\": \"{user_id}\", \"granted\": false, \"scopeName\": \"view\" }'",
"curl -X DELETE http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/permission/ticket/{ticket_id} -H 'Authorization: Bearer 'USDaccess_token",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/authz/protection/uma-policy/{resource_id}",
"curl -X POST http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"roles\": [\"people-manager\"] }'",
"curl -X POST http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"groups\": [\"/Managers/People Managers\"] }'",
"curl -X POST http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"clients\": [\"my-client\"] }'",
"curl -X POST http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"condition\": \"if (isPeopleManager()) {USDevaluation.grant()}\" }'",
"curl -X PUT http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{permission_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"id\": \"21eb3fed-02d7-4b5a-9102-29f3f09b6de2\", \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"type\": \"uma\", \"scopes\": [ \"album:view\" ], \"logic\": \"POSITIVE\", \"decisionStrategy\": \"UNANIMOUS\", \"owner\": \"7e22131a-aa57-4f5f-b1db-6e82babcd322\", \"roles\": [ \"user\" ] }'",
"curl -X DELETE http://localhost:8180/auth/realms/photoz/authz/protection/uma-policy/{permission_id} -H 'Authorization: Bearer 'USDaccess_token",
"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?resource={resource_id}",
"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?name=Any people manager",
"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy?scope=read",
"http://USD{host}:USD{port}/auth/realms/USD{realm}/authz/protection/uma-policy",
"{ \"authorization\": { \"permissions\": [ { \"resource_set_id\": \"d2fe9843-6462-4bfc-baba-b5787bb6e0e7\", \"resource_set_name\": \"Hello World Resource\" } ] }, \"jti\": \"d6109a09-78fd-4998-bf89-95730dfd0892-1464906679405\", \"exp\": 1464906971, \"nbf\": 0, \"iat\": 1464906671, \"sub\": \"f1888f4d-5172-4359-be0c-af338505d86c\", \"typ\": \"kc_ett\", \"azp\": \"hello-world-authz-service\" }",
"http://USD{host}:USD{port}/auth/realms/USD{realm_name}/protocol/openid-connect/token/introspect",
"curl -X POST -H \"Authorization: Basic aGVsbG8td29ybGQtYXV0aHotc2VydmljZTpzZWNyZXQ=\" -H \"Content-Type: application/x-www-form-urlencoded\" -d 'token_type_hint=requesting_party_token&token=USD{RPT}' \"http://localhost:8080/auth/realms/hello-world-authz/protocol/openid-connect/token/introspect\"",
"{ \"permissions\": [ { \"resource_id\": \"90ccc6fc-b296-4cd1-881e-089e1ee15957\", \"resource_name\": \"Hello World Resource\" } ], \"exp\": 1465314139, \"nbf\": 0, \"iat\": 1465313839, \"aud\": \"hello-world-authz-service\", \"active\": true }",
"{ \"active\": false }",
"<dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>USD{KEYCLOAK_VERSION}</version> </dependency> </dependencies>",
"{ \"realm\": \"hello-world-authz\", \"auth-server-url\" : \"http://localhost:8080/auth\", \"resource\" : \"hello-world-authz-service\", \"credentials\": { \"secret\": \"secret\" } }",
"// create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create();",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission(\"Default Resource\"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName(\"New Resource\"); newResource.setType(\"urn:hello-world-authz:resources:example\"); newResource.addScope(new ScopeRepresentation(\"urn:hello-world-authz:scopes:view\")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource);",
"// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println(\"Token status is: \" + requestingPartyToken.getActive()); System.out.println(\"Permissions granted by the server: \"); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/authorization_services_guide/service_overview |
Chapter 11. Contacting Red Hat support for service | Chapter 11. Contacting Red Hat support for service If the information in this guide did not help you to solve the problem, this chapter explains how you contact the Red Hat support service. Prerequisites Red Hat support account. 11.1. Providing information to Red Hat Support engineers If you are unable to fix problems related to Red Hat Ceph Storage, contact the Red Hat Support Service and provide sufficient amount of information that helps the support engineers to faster troubleshoot the problem you encounter. Prerequisites Root-level access to the node. Red Hat support account. Procedure Open a support ticket on the Red Hat Customer Portal . Ideally, attach an sosreport to the ticket. See the What is a sosreport and how to create one in Red Hat Enterprise Linux? solution for details. If the Ceph daemons fail with a segmentation fault, consider generating a human-readable core dump file. See Generating readable core dump files for details. 11.2. Generating readable core dump files When a Ceph daemon terminates unexpectedly with a segmentation fault, gather the information about its failure and provide it to the Red Hat Support Engineers. Such information speeds up the initial investigation. Also, the Support Engineers can compare the information from the core dump files with Red Hat Ceph Storage cluster known issues. Prerequisites Install the debuginfo packages if they are not installed already. Enable the following repositories to install the required debuginfo packages. Example Once the repository is enabled, you can install the debug info packages that you need from this list of supported packages: Ensure that the gdb package is installed and if it is not, install it: Example Section 11.2.1, "Generating readable core dump files in containerized deployments" 11.2.1. Generating readable core dump files in containerized deployments You can generate a core dump file for Red Hat Ceph Storage, which involves two scenarios of capturing the core dump file: When a Ceph process terminates unexpectedly due to the SIGILL, SIGTRAP, SIGABRT, or SIGSEGV error. or Manually, for example for debugging issues such as Ceph processes are consuming high CPU cycles, or are not responding. Prerequisites Root-level access to the container node running the Ceph containers. Installation of the appropriate debugging packages. Installation of the GNU Project Debugger ( gdb ) package. Ensure the hosts has at least 8 GB RAM. If there are multiple daemons on the host, then Red Hat recommends more RAM. Procedure If a Ceph process terminates unexpectedly due to the SIGILL, SIGTRAP, SIGABRT, or SIGSEGV error: Set the core pattern to the systemd-coredump service on the node where the container with the failed Ceph process is running: Example Watch for the container failure due to a Ceph process and search for the core dump file in the /var/lib/systemd/coredump/ directory: Example To manually capture a core dump file for the Ceph Monitors and Ceph OSDs : Get the MONITOR_ID or the OSD_ID and enter the container: Syntax Example Install the procps-ng and gdb packages inside the container: Example Find the process ID: Syntax Replace PROCESS with the name of the running process, for example ceph-mon or ceph-osd . Example Generate the core dump file: Syntax Replace ID with the ID of the process that you got from the step, for example 18110 : Example Verify that the core dump file has been generated correctly. 
Example Copy the core dump file outside of the Ceph Monitor container: Syntax Replace MONITOR_ID with the ID number of the Ceph Monitor and replace MONITOR_PID with the process ID number. To manually capture a core dump file for other Ceph daemons: Log in to the cephadm shell : Example Enable ptrace for the daemons: Example Redeploy the daemon service: Syntax Example Exit the cephadm shell and log in to the host where the daemons are deployed: Example Get the DAEMON_ID and enter the container: Example Install the procps-ng and gdb packages: Example Get the PID of process: Example Gather core dump: Syntax Example Verify that the core dump file has been generated correctly. Example Copy the core dump file outside the container: Syntax Replace DAEMON_ID with the ID number of the Ceph daemon and replace PID with the process ID number. Upload the core dump file for analysis to a Red Hat support case. See Providing information to Red Hat Support engineers for details. Additional Resources The How to use gdb to generate a readable backtrace from an application core solution on the Red Hat Customer Portal The How to enable core file dumps when an application crashes or segmentation faults solution on the Red Hat Customer Portal | [
"subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms yum --enable=rhceph-6-tools-for-rhel-9-x86_64-debug-rpms",
"ceph-base-debuginfo ceph-common-debuginfo ceph-debugsource ceph-fuse-debuginfo ceph-immutable-object-cache-debuginfo ceph-mds-debuginfo ceph-mgr-debuginfo ceph-mon-debuginfo ceph-osd-debuginfo ceph-radosgw-debuginfo cephfs-mirror-debuginfo",
"dnf install gdb",
"echo \"| /usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e\" > /proc/sys/kernel/core_pattern",
"ls -ltr /var/lib/systemd/coredump total 8232 -rw-r-----. 1 root root 8427548 Jan 22 19:24 core.ceph-osd.167.5ede29340b6c4fe4845147f847514c12.15622.1584573794000000.xz",
"ps exec -it MONITOR_ID_OR_OSD_ID bash",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-osd-2 bash",
"dnf install procps-ng gdb",
"ps -aef | grep PROCESS | grep -v run",
"ps -aef | grep ceph-mon | grep -v run ceph 15390 15266 0 18:54 ? 00:00:29 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 5 ceph 18110 17985 1 19:40 ? 00:00:08 /usr/bin/ceph-mon --cluster ceph --setroot ceph --setgroup ceph -d -i 2",
"gcore ID",
"gcore 18110 warning: target file /proc/18110/cmdline contained unexpected null characters Saved corefile core.18110",
"ls -ltr total 709772 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.18110",
"cp ceph-mon- MONITOR_ID :/tmp/mon.core. MONITOR_PID /tmp",
"cephadm shell",
"ceph config set mgr mgr/cephadm/allow_ptrace true",
"ceph orch redeploy SERVICE_ID",
"ceph orch redeploy mgr ceph orch redeploy rgw.rgw.1",
"exit ssh [email protected]",
"podman ps podman exec -it ceph-1ca9f6a8-d036-11ec-8263-fa163ee967ad-rgw-rgw-1-host04 bash",
"dnf install procps-ng gdb",
"ps aux | grep rados ceph 6 0.3 2.8 5334140 109052 ? Sl May10 5:25 /usr/bin/radosgw -n client.rgw.rgw.1.host04 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug",
"gcore PID",
"gcore 6",
"ls -ltr total 108798 -rw-r--r--. 1 root root 726799544 Mar 18 19:46 core.6",
"cp ceph-mon- DAEMON_ID :/tmp/mon.core. PID /tmp"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/contacting-red-hat-support-for-service |
Chapter 9. Streams | Chapter 9. Streams You may want to process a subset or all data in the cache to produce a result. This may bring thoughts of Map Reduce. Data Grid allows the user to do something very similar but utilizes the standard JRE APIs to do so. Java 8 introduced the concept of a Stream which allows functional-style operations on collections rather than having to procedurally iterate over the data yourself. Stream operations can be implemented in a fashion very similar to MapReduce. Streams, just like MapReduce allow you to perform processing upon the entirety of your cache, possibly a very large data set, but in an efficient way. Note Streams are the preferred method when dealing with data that exists in the cache because streams automatically adjust to cluster topology changes. Also since we can control how the entries are iterated upon we can more efficiently perform the operations in a cache that is distributed if you want it to perform all of the operations across the cluster concurrently. A stream is retrieved from the entrySet , keySet or values collections returned from the Cache by invoking the stream or parallelStream methods. 9.1. Common stream operations This section highlights various options that are present irrespective of what type of underlying cache you are using. 9.2. Key filtering It is possible to filter the stream so that it only operates upon a given subset of keys. This can be done by invoking the filterKeys method on the CacheStream . This should always be used over a Predicate filter and will be faster if the predicate was holding all keys. If you are familiar with the AdvancedCache interface you may be wondering why you even use getAll over this keyFilter. There are some small benefits (mostly smaller payloads) to using getAll if you need the entries as is and need them all in memory in the local node. However if you need to do processing on these elements a stream is recommended since you will get both distributed and threaded parallelism for free. 9.3. Segment based filtering Note This is an advanced feature and should only be used with deep knowledge of Data Grid segment and hashing techniques. These segments based filtering can be useful if you need to segment data into separate invocations. This can be useful when integrating with other tools such as Apache Spark . This option is only supported for replicated and distributed caches. This allows the user to operate upon a subset of data at a time as determined by the KeyPartitioner . The segments can be filtered by invoking filterKeySegments method on the CacheStream . This is applied after the key filter but before any intermediate operations are performed. 9.4. Local/Invalidation A stream used with a local or invalidation cache can be used just the same way you would use a stream on a regular collection. Data Grid handles all of the translations if necessary behind the scenes and works with all of the more interesting options (ie. storeAsBinary and a cache loader). Only data local to the node where the stream operation is performed will be used, for example invalidation only uses local entries. 9.5. Example The code below takes a cache and returns a map with all the cache entries whose values contain the string "JBoss" Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("JBoss")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); 9.6. Distribution/Replication/Scattered This is where streams come into their stride. 
When a stream operation is performed it will send the various intermediate and terminal operations to each node that has pertinent data. This allows processing the intermediate values on the nodes owning the data, and only sending the final results back to the originating nodes, improving performance. 9.6.1. Rehash Aware Internally the data is segmented and each node only performs the operations upon the data it owns as a primary owner. This allows for data to be processed evenly, assuming segments are granular enough to provide for equal amounts of data on each node. When you are utilizing a distributed cache, the data can be reshuffled between nodes when a new node joins or leaves. Distributed Streams handle this reshuffling of data automatically so you don't have to worry about monitoring when nodes leave or join the cluster. Reshuffled entries may be processed a second time, and we keep track of the processed entries at the key level or at the segment level (depending on the terminal operation) to limit the amount of duplicate processing. It is possible but highly discouraged to disable rehash awareness on the stream. This should only be considered if your request can handle only seeing a subset of data if a rehash occurs. This can be done by invoking CacheStream.disableRehashAware() The performance gain for most operations when a rehash doesn't occur is completely negligible. The only exceptions are for iterator and forEach, which will use less memory, since they do not have to keep track of processed keys. Warning Please rethink disabling rehash awareness unless you really know what you are doing. 9.6.2. Serialization Since the operations are sent across to other nodes they must be serializable by Data Grid marshalling. This allows the operations to be sent to the other nodes. The simplest way is to use a CacheStream instance and use a lambda just as you would normally. Data Grid overrides all of the various Stream intermediate and terminal methods to take Serializable versions of the arguments (ie. SerializableFunction, SerializablePredicate... ) You can find these methods at CacheStream . This relies on the spec to pick the most specific method as defined here . In our example we used a Collector to collect all the results into a Map . Unfortunately the Collectors class doesn't produce Serializable instances. Thus if you need to use these, there are two ways to do so: One option would be to use the CacheCollectors class which allows for a Supplier<Collector> to be provided. This instance could then use the Collectors to supply a Collector which is not serialized. Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("Jboss")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))); Alternatively, you can avoid the use of CacheCollectors and instead use the overloaded collect methods that take Supplier<Collector> . These overloaded collect methods are only available via CacheStream interface. Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains("Jboss")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); If however you are not able to use the Cache and CacheStream interfaces you cannot utilize Serializable arguments and you must instead cast the lambdas to be Serializable manually by casting the lambda to multiple interfaces. It is not a pretty sight but it gets the job done. 
Map<Object, String> jbossValues = map.entrySet().stream() .filter((Serializable & Predicate<Map.Entry<Object, String>>) e -> e.getValue().contains("Jboss")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue))); The recommended and most performant way is to use an AdvancedExternalizer as this provides the smallest payload. Unfortunately this means you cannot use lamdbas as advanced externalizers require defining the class before hand. You can use an advanced externalizer as shown below: Map<Object, String> jbossValues = cache.entrySet().stream() .filter(new ContainsFilter("Jboss")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ContainsFilter implements Predicate<Map.Entry<Object, String>> { private final String target; ContainsFilter(String target) { this.target = target; } @Override public boolean test(Map.Entry<Object, String> e) { return e.getValue().contains(target); } } class JbossFilterExternalizer implements AdvancedExternalizer<ContainsFilter> { @Override public Set<Class<? extends ContainsFilter>> getTypeClasses() { return Util.asSet(ContainsFilter.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ContainsFilter object) throws IOException { output.writeUTF(object.target); } @Override public ContainsFilter readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new ContainsFilter(input.readUTF()); } } You could also use an advanced externalizer for the collector supplier to reduce the payload size even further. Map<Object, String> map = (Map<Object, String>) cache.entrySet().stream() .filter(new ContainsFilter("Jboss")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ToMapCollectorSupplier<K, U> implements Supplier<Collector<Map.Entry<K, U>, ?, Map<K, U>>> { static final ToMapCollectorSupplier INSTANCE = new ToMapCollectorSupplier(); private ToMapCollectorSupplier() { } @Override public Collector<Map.Entry<K, U>, ?, Map<K, U>> get() { return Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue); } } class ToMapCollectorSupplierExternalizer implements AdvancedExternalizer<ToMapCollectorSupplier> { @Override public Set<Class<? extends ToMapCollectorSupplier>> getTypeClasses() { return Util.asSet(ToMapCollectorSupplier.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ToMapCollectorSupplier object) throws IOException { } @Override public ToMapCollectorSupplier readObject(ObjectInput input) throws IOException, ClassNotFoundException { return ToMapCollectorSupplier.INSTANCE; } } 9.7. Parallel Computation Distributed streams by default try to parallelize as much as possible. It is possible for the end user to control this and actually they always have to control one of the options. There are 2 ways these streams are parallelized. Local to each node When a stream is created from the cache collection the end user can choose between invoking stream or parallelStream method. Depending on if the parallel stream was picked will enable multiple threading for each node locally. Note that some operations like a rehash aware iterator and forEach operations will always use a sequential stream locally. This could be enhanced at some point to allow for parallel streams locally. 
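For example, the only code difference between the two local modes is the method used to obtain the stream. The following is a minimal sketch; the cache contents, the filter predicate, and the variable names are illustrative assumptions rather than part of the documented API:

// Sequential processing of local entries on each node
long sequentialCount = cache.entrySet().stream()
      .filter(e -> e.getValue().contains("JBoss"))   // illustrative predicate
      .count();

// Parallel processing of local entries on each node (multiple local threads)
long parallelCount = cache.entrySet().parallelStream()
      .filter(e -> e.getValue().contains("JBoss"))
      .count();

Both variants produce the same result; the parallel form only changes how each node iterates over its locally owned entries.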
Users should be careful when using local parallelism, as it is only faster when there are a large number of entries or the operations are computationally expensive. Also note that if a user uses a parallel stream with forEach, the action should not block, because it is executed on the common pool, which is normally reserved for computation operations. Remote requests When there are multiple nodes it may be desirable to control whether the remote requests are all processed concurrently or one at a time. By default, all terminal operations except the iterator perform concurrent requests. The iterator method, to reduce overall memory pressure on the local node, performs only sequential requests, which actually performs slightly better. If a user wishes to change this default, they can do so by invoking the sequentialDistribution or parallelDistribution methods on the CacheStream . 9.8. Task timeout It is possible to set a timeout value for the operation requests. This timeout applies only to remote requests and is applied on a per request basis. This means that local execution will not time out, and that in a failover scenario as described above each subsequent request has a new timeout. If no timeout is specified, the replication timeout is used as the default timeout. You can set the timeout in your task by doing the following: CacheStream<Map.Entry<Object, String>> stream = cache.entrySet().stream(); stream.timeout(1, TimeUnit.MINUTES); For more information, see the timeout javadoc. 9.9. Injection The Stream has a terminal operation called forEach which allows for running some sort of side effect operation on the data. In this case it may be desirable to get a reference to the Cache that is backing this Stream. If your Consumer implements the CacheAware interface, the injectCache method will be invoked before the accept method from the Consumer interface. 9.10. Distributed Stream execution Distributed streams execution works in a fashion very similar to map reduce, except in this case we are sending zero to many intermediate operations (map, filter, and so on) and a single terminal operation to the various nodes. The operation basically comes down to the following: The desired segments are grouped by which node is the primary owner of the given segment A request is generated to send to each remote node that contains the intermediate and terminal operations, including which segments it should process The terminal operation will be performed locally if necessary Each remote node will receive this request, run the operations, and subsequently send the response back The local node will then gather the local response and remote responses together, performing any kind of reduction required by the operations themselves. The final reduced response is then returned to the user In most cases all operations are fully distributed; that is, the operations are fully applied on each remote node and usually only the last operation, or something related to it, may be reapplied to reduce the results from multiple nodes. One important note is that intermediate values do not actually have to be serializable; it is the last value sent back that matters (exceptions for various operations are highlighted below). Terminal operator distributed result reductions The following paragraphs describe how the distributed reductions work for the various terminal operators. 
Some of these are special in that an intermediate value may be required to be serializable instead of the final result. allMatch noneMatch anyMatch The allMatch operation is run on each node and then all the results are logically ANDed together locally to get the appropriate value. The noneMatch and anyMatch operations use a logical OR instead. These methods also have early termination support, stopping remote and local operations once the final result is known. collect The collect method is interesting in that it can do a few extra steps. The remote node performs everything as normal except it doesn't perform the final finisher upon the result and instead sends back the fully combined results. The local thread then combines the remote and local results into a value which is then finally finished. The key thing to remember here is that the final value doesn't have to be serializable, but rather the values produced from the supplier and combiner methods do. count The count method just adds the numbers together from each node. findAny findFirst The findAny operation returns just the first value it finds, whether it was from a remote node or locally. Note this supports early termination in that once a value is found it will not process others. Note the findFirst method is special since it requires a sorted intermediate operation, which is detailed in the exceptions section. max min The max and min methods find the respective min or max value on each node, then a final reduction is performed locally to ensure only the min or max across all nodes is returned. reduce The various reduce methods 1 , 2 , 3 will end up serializing the result as much as the accumulator can do. Then it will accumulate the local and remote results together locally before combining, if you have provided a combiner. Note this means a value coming from the combiner doesn't have to be Serializable. 9.11. Key based rehash aware operators The iterator , spliterator and forEach are unlike the other terminal operators in that the rehash awareness has to keep track of which keys per segment have been processed instead of just segments. This is to guarantee exactly once (iterator and spliterator) or at least once (forEach) behavior even under cluster membership changes. The iterator and spliterator operators, when invoked on a remote node, return batches of entries, where the next batch is only sent after the previous one has been fully consumed. This batching is done to limit how many entries are in memory at a given time. The user node will hold onto which keys it has processed, and when a given segment is completed it will release those keys from memory. This is why sequential processing is preferred for the iterator method, so only a subset of segment keys are held in memory at once, instead of from all nodes. The forEach() method also returns batches, but it returns a batch of keys after it has finished processing at least a batch worth of keys. This way the originating node can know which keys have been processed already to reduce the chances of processing the same entry again. Unfortunately this means it is possible to have at least once behavior when a node goes down unexpectedly. In this case that node could have been processing a batch and not yet completed it, and those entries that were processed but not in a completed batch will be run again when the rehash failure operation occurs. Note that adding a node will not cause this issue, as the rehash failover doesn't occur until all responses are received. 
The batch sizes of these operations are both controlled by the same value, which can be configured by invoking the distributedBatchSize method on the CacheStream . This value defaults to the chunkSize configured in state transfer. Unfortunately this value is a tradeoff between memory usage, performance, and at least once semantics, so your mileage may vary. Using iterator with replicated and distributed caches When a node is the primary or backup owner of all requested segments for a distributed stream, Data Grid performs the iterator or spliterator terminal operations locally, which optimizes performance as remote iterations are more resource intensive. This optimization applies to both replicated and distributed caches. However, Data Grid performs iterations remotely when using cache stores that are both shared and have write-behind enabled. In this case performing the iterations remotely ensures consistency. 9.12. Intermediate operation exceptions There are some intermediate operations that have special exceptions; these are skip , peek , sorted , and distinct . All of these methods have some sort of artificial iterator implanted in the stream processing to guarantee correctness, and they are documented below. Note this means these operations may cause possibly severe performance degradation. Skip An artificial iterator is implanted up to the intermediate skip operation. Then results are brought locally so it can skip the appropriate number of elements. Sorted WARNING: This operation requires having all entries in memory on the local node. An artificial iterator is implanted up to the intermediate sorted operation. All results are sorted locally. There are possible plans to have a distributed sort which returns batches of elements, but this is not yet implemented. Distinct WARNING: This operation requires having all or nearly all entries in memory on the local node. Distinct is performed on each remote node and then an artificial iterator returns those distinct values. Then finally all of those results have a distinct operation performed upon them. The rest of the intermediate operations are fully distributed as one would expect. 9.13. Examples Word Count Word count is a classic, if overused, example of the map/reduce paradigm. Assume we have a mapping of key to sentence stored on Data Grid nodes. The key is a String, each sentence is also a String, and we have to count the occurrences of all words in all available sentences. 
The implementation of such a distributed task could be defined as follows: public class WordCountExample { /** * In this example replace c1 and c2 with * real Cache references * * @param args */ public static void main(String[] args) { Cache<String, String> c1 = ...; Cache<String, String> c2 = ...; c1.put("1", "Hello world here I am"); c2.put("2", "Infinispan rules the world"); c1.put("3", "JUDCon is in Boston"); c2.put("4", "JBoss World is in Boston as well"); c1.put("12","JBoss Application Server"); c2.put("15", "Hello world"); c1.put("14", "Infinispan community"); c2.put("15", "Hello world"); c1.put("111", "Infinispan open source"); c2.put("112", "Boston is close to Toronto"); c1.put("113", "Toronto is a capital of Ontario"); c2.put("114", "JUDCon is cool"); c1.put("211", "JBoss World is awesome"); c2.put("212", "JBoss rules"); c1.put("213", "JBoss division of RedHat "); c2.put("214", "RedHat community"); Map<String, Long> wordCountMap = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); } } In this case it is pretty simple to do the word count from the example. However what if we want to find the most frequent word in the example? If you take a second to think about this case you will realize you need to have all words counted and available locally first. Thus we actually have a few options. We could use a finisher on the collector, which is invoked on the user thread after all the results have been collected. Some redundant lines have been removed from the example. public class WordCountExample { public static void main(String[] args) { // Lines removed String mostFrequentWord = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.collectingAndThen( Collectors.groupingBy(Function.identity(), Collectors.counting()), wordCountMap -> { String mostFrequent = null; long maxCount = 0; for (Map.Entry<String, Long> e : wordCountMap.entrySet()) { int count = e.getValue().intValue(); if (count > maxCount) { maxCount = count; mostFrequent = e.getKey(); } } return mostFrequent; })); } Unfortunately the last step is only going to be ran in a single thread, which if we have a lot of words could be quite slow. Maybe there is another way to parallelize this with Streams. We mentioned before we are in the local node after processing, so we could actually use a stream on the map results. We can therefore use a parallel stream on the results. public class WordFrequencyExample { public static void main(String[] args) { // Lines removed Map<String, Long> wordCount = c1.entrySet().parallelStream() .map(e -> e.getValue().split("\\s")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); Optional<Map.Entry<String, Long>> mostFrequent = wordCount.entrySet().parallelStream().reduce( (e1, e2) -> e1.getValue() > e2.getValue() ? e1 : e2); This way you can still utilize all of the cores locally when calculating the most frequent element. Remove specific entries Distributed streams can also be used as a way to modify data where it lives. For example you may want to remove all entries in your cache that contain a specific word. public class RemoveBadWords { public static void main(String[] args) { // Lines removed String word = .. 
c1.entrySet().parallelStream() .filter(e -> e.getValue().contains(word)) .forEach((c, e) -> c.remove(e.getKey())); If we carefully note what is serialized and what is not, we notice that only the word along with the operations is serialized across to other nodes, as it is captured by the lambda. However, the real saving is that the cache operation is performed on the primary owner, thus reducing the amount of network traffic required to remove these values from the cache. The cache is not captured by the lambda because we provide a special BiConsumer method override that, when invoked on each node, passes the cache to the BiConsumer . One thing to keep in mind when using the forEach command in this manner is that the underlying stream obtains no locks. The cache remove operation will still obtain locks naturally, but the value could have changed from what the stream saw. That means that the entry could have been modified after the stream read it, but the remove still removed it. For this case we have specifically added a new variant which is called LockedStream . Plenty of other examples The Streams API is a JRE tool and there are lots of examples for using it. Just remember that your operations need to be Serializable in some way. | [
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"JBoss\")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"Jboss\")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(e -> e.getValue().contains(\"Jboss\")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));",
"Map<Object, String> jbossValues = map.entrySet().stream() .filter((Serializable & Predicate<Map.Entry<Object, String>>) e -> e.getValue().contains(\"Jboss\")) .collect(CacheCollectors.serializableCollector(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));",
"Map<Object, String> jbossValues = cache.entrySet().stream() .filter(new ContainsFilter(\"Jboss\")) .collect(() -> Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ContainsFilter implements Predicate<Map.Entry<Object, String>> { private final String target; ContainsFilter(String target) { this.target = target; } @Override public boolean test(Map.Entry<Object, String> e) { return e.getValue().contains(target); } } class JbossFilterExternalizer implements AdvancedExternalizer<ContainsFilter> { @Override public Set<Class<? extends ContainsFilter>> getTypeClasses() { return Util.asSet(ContainsFilter.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ContainsFilter object) throws IOException { output.writeUTF(object.target); } @Override public ContainsFilter readObject(ObjectInput input) throws IOException, ClassNotFoundException { return new ContainsFilter(input.readUTF()); } }",
"Map<Object, String> map = (Map<Object, String>) cache.entrySet().stream() .filter(new ContainsFilter(\"Jboss\")) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); class ToMapCollectorSupplier<K, U> implements Supplier<Collector<Map.Entry<K, U>, ?, Map<K, U>>> { static final ToMapCollectorSupplier INSTANCE = new ToMapCollectorSupplier(); private ToMapCollectorSupplier() { } @Override public Collector<Map.Entry<K, U>, ?, Map<K, U>> get() { return Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue); } } class ToMapCollectorSupplierExternalizer implements AdvancedExternalizer<ToMapCollectorSupplier> { @Override public Set<Class<? extends ToMapCollectorSupplier>> getTypeClasses() { return Util.asSet(ToMapCollectorSupplier.class); } @Override public Integer getId() { return CUSTOM_ID; } @Override public void writeObject(ObjectOutput output, ToMapCollectorSupplier object) throws IOException { } @Override public ToMapCollectorSupplier readObject(ObjectInput input) throws IOException, ClassNotFoundException { return ToMapCollectorSupplier.INSTANCE; } }",
"CacheStream<Map.Entry<Object, String>> stream = cache.entrySet().stream(); stream.timeout(1, TimeUnit.MINUTES);",
"public class WordCountExample { /** * In this example replace c1 and c2 with * real Cache references * * @param args */ public static void main(String[] args) { Cache<String, String> c1 = ...; Cache<String, String> c2 = ...; c1.put(\"1\", \"Hello world here I am\"); c2.put(\"2\", \"Infinispan rules the world\"); c1.put(\"3\", \"JUDCon is in Boston\"); c2.put(\"4\", \"JBoss World is in Boston as well\"); c1.put(\"12\",\"JBoss Application Server\"); c2.put(\"15\", \"Hello world\"); c1.put(\"14\", \"Infinispan community\"); c2.put(\"15\", \"Hello world\"); c1.put(\"111\", \"Infinispan open source\"); c2.put(\"112\", \"Boston is close to Toronto\"); c1.put(\"113\", \"Toronto is a capital of Ontario\"); c2.put(\"114\", \"JUDCon is cool\"); c1.put(\"211\", \"JBoss World is awesome\"); c2.put(\"212\", \"JBoss rules\"); c1.put(\"213\", \"JBoss division of RedHat \"); c2.put(\"214\", \"RedHat community\"); Map<String, Long> wordCountMap = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); } }",
"public class WordCountExample { public static void main(String[] args) { // Lines removed String mostFrequentWord = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.collectingAndThen( Collectors.groupingBy(Function.identity(), Collectors.counting()), wordCountMap -> { String mostFrequent = null; long maxCount = 0; for (Map.Entry<String, Long> e : wordCountMap.entrySet()) { int count = e.getValue().intValue(); if (count > maxCount) { maxCount = count; mostFrequent = e.getKey(); } } return mostFrequent; })); }",
"public class WordFrequencyExample { public static void main(String[] args) { // Lines removed Map<String, Long> wordCount = c1.entrySet().parallelStream() .map(e -> e.getValue().split(\"\\\\s\")) .flatMap(Arrays::stream) .collect(() -> Collectors.groupingBy(Function.identity(), Collectors.counting())); Optional<Map.Entry<String, Long>> mostFrequent = wordCount.entrySet().parallelStream().reduce( (e1, e2) -> e1.getValue() > e2.getValue() ? e1 : e2);",
"public class RemoveBadWords { public static void main(String[] args) { // Lines removed String word = .. c1.entrySet().parallelStream() .filter(e -> e.getValue().contains(word)) .forEach((c, e) -> c.remove(e.getKey()));"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/streams_streams |
Chapter 1. Compute service (nova) functionality | Chapter 1. Compute service (nova) functionality You use the Compute (nova) service to create, provision, and manage virtual machine instances and bare metal servers in a Red Hat OpenStack Platform (RHOSP) environment. The Compute service abstracts the underlying hardware that it runs on, rather than exposing specifics about the underlying host platforms. For example, rather than exposing the types and topologies of CPUs running on hosts, the Compute service exposes a number of virtual CPUs (vCPUs) and allows for overcommitting of these vCPUs. The Compute service uses the KVM hypervisor to execute Compute service workloads. The libvirt driver interacts with QEMU to handle all interactions with KVM, and enables the creation of virtual machine instances. To create and provision instances, the Compute service interacts with the following RHOSP services: Identity (keystone) service for authentication. Placement service for resource inventory tracking and selection. Image Service (glance) for disk and instance images. Networking (neutron) service for provisioning the virtual or physical networks that instances connect to on boot. The Compute service consists of daemon processes and services, named nova-* . The following are the core Compute services: Compute service ( nova-compute ) This service creates, manages and terminates instances by using the libvirt for KVM or QEMU hypervisor APIs, and updates the database with instance states. Compute conductor ( nova-conductor ) This service mediates interactions between the Compute service and the database, which insulates Compute nodes from direct database access. Do not deploy this service on nodes where the nova-compute service runs. Compute scheduler ( nova-scheduler ) This service takes an instance request from the queue and determines on which Compute node to host the instance. Compute API ( nova-api ) This service provides the external REST API to users. API database This database tracks instance location information, and provides a temporary location for instances that are built but not scheduled. In multi-cell deployments, this database also contains cell mappings that specify the database connection for each cell. Cell database This database contains most of the information about instances. It is used by the API database, the conductor, and the Compute services. Message queue This messaging service is used by all services to communicate with each other within the cell and with the global services. Compute metadata This service stores data specific to instances. Instances access the metadata service at http://169.254.169.254 or over IPv6 at the link-local address fe80::a9fe:a9fe. The Networking (neutron) service is responsible for forwarding requests to the metadata API server. You must use the NeutronMetadataProxySharedSecret parameter to set a secret keyword in the configuration of both the Networking service and the Compute service to allow the services to communicate. The Compute metadata service can be run globally, as part of the Compute API, or in each cell. You can deploy more than one Compute node. The hypervisor that operates instances runs on each Compute node. Each Compute node requires a minimum of two network interfaces. The Compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances through security groups. By default, director installs the overcloud with a single cell for all Compute nodes. 
This cell contains all the Compute services and databases that control and manage the virtual machine instances, and all the instances and instance metadata. For larger deployments, you can deploy the overcloud with multiple cells to accommodate a larger number of Compute nodes. You can add cells to your environment when you install a new overcloud or at any time afterwards. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_compute-service-nova-functionality_about-compute |
Part I. Upgrading your Red Hat build of OptaPlanner projects to OptaPlanner 8 | Part I. Upgrading your Red Hat build of OptaPlanner projects to OptaPlanner 8 If you have OptaPlanner projects that you created with the OptaPlanner 7 or earlier public API and you want to upgrade your project code to OptaPlanner 8, review the information in this guide. This guide also includes changes to implementation classes which are outside of the public API. The OptaPlanner public API is a subset of the OptaPlanner source code that enables you to interact with OptaPlanner through Java code. So that you can upgrade to higher OptaPlanner versions within the same major release, OptaPlanner follows semantic versioning . This means that you can upgrade from OptaPlanner 7.44 to OptaPlanner 7.48, for example, without breaking your code that uses the OptaPlanner public API. The OptaPlanner public API classes are compatible within the versions of a major OptaPlanner release. However, when Red Hat releases a new major release, disruptive changes are sometimes introduced to the public API. OptaPlanner 8 is a new major release, and some of the changes to the public API are not compatible with earlier versions of OptaPlanner. OptaPlanner 8 will be the foundation for the 8.x series for the next few years. The changes to the public API that are not compatible with earlier versions were required for this release and were made for the long-term benefit of this project. Table 1. Red Hat Decision Manager and Red Hat build of OptaPlanner versions Decision Manager OptaPlanner 7.7 7.33 7.8 7.39 7.9 7.44 7.10 7.48 7.11 8.5 Every upgrade note has a label that indicates how likely it is that your code will be affected by that change. The following table describes each label: Table 2. Upgrade impact labels Label Impact Major Likely to affect your code. Minor Unlikely to affect your code, especially if you followed the examples, unless you have customized the code extensively. Any changes that are not compatible with earlier versions of OptaPlanner are annotated with the Public API tag. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optimizer-migration-8_developing-solvers
Chapter 4. Configuring a Docker registry to use Red Hat build of Keycloak | Chapter 4. Configuring a Docker registry to use Red Hat build of Keycloak Note Docker authentication is disabled by default. To enable it, see the Enabling and disabling features chapter. This section describes how you can configure a Docker registry to use Red Hat build of Keycloak as its authentication server. For more information on how to set up and configure a Docker registry, see the Docker Registry Configuration Guide . 4.1. Docker registry configuration file installation For users with more advanced Docker registry configurations, it is generally recommended to provide your own registry configuration file. The Red Hat build of Keycloak Docker provider supports this mechanism via the Registry Config File Format Option. Choosing this option will generate output similar to the following: This output can then be copied into any existing registry config file. See the registry config file specification for more information on how the file should be set up, or start with a basic example . Warning Don't forget to configure the rootcertbundle field with the location of the Red Hat build of Keycloak realm's public key. The auth configuration will not work without this argument. 4.2. Docker registry environment variable override installation Oftentimes it is appropriate to use a simple environment variable override for development or proof-of-concept (POC) Docker registries. While this approach is usually not recommended for production use, it can be helpful when one requires a quick-and-dirty way to stand up a registry. Simply use the Variable Override Format Option from the client details, and output like the one below should appear: Warning Don't forget to configure the REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE override with the location of the Red Hat build of Keycloak realm's public key. The auth configuration will not work without this argument. 4.3. Docker Compose YAML File Warning This installation method is meant to be an easy way to get a Docker registry authenticating against a Red Hat build of Keycloak server. It is intended for development purposes only and should never be used in a production or production-like environment. The zip file installation mechanism provides a quickstart for developers who want to understand how the Red Hat build of Keycloak server can interact with the Docker registry. To configure it: Procedure From the desired realm, create a client configuration. At this point you will not have a Docker registry - the quickstart will take care of that part. Choose the "Docker Compose YAML" option from the Action menu and select the Download adapter config option to download the ZIP file. Unzip the archive to the desired location, and open the directory. Start the Docker registry with docker-compose up Note it is recommended that you configure the Docker registry client in a realm other than 'master', since the HTTP Basic auth flow will not present forms. Once the above configuration has taken place, and the Red Hat build of Keycloak server and Docker registry are running, Docker authentication should be successful: | [
"auth: token: realm: http://localhost:8080/realms/master/protocol/docker-v2/auth service: docker-test issuer: http://localhost:8080/realms/master",
"REGISTRY_AUTH_TOKEN_REALM: http://localhost:8080/realms/master/protocol/docker-v2/auth REGISTRY_AUTH_TOKEN_SERVICE: docker-test REGISTRY_AUTH_TOKEN_ISSUER: http://localhost:8080/realms/master",
"docker login localhost:5000 -u USDusername Password: ******* Login Succeeded"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/securing_applications_and_services_guide/configuring_a_docker_registry_to_use_red_hat_build_of_keycloak |
Chapter 1. Overview of Security Topics | Chapter 1. Overview of Security Topics Due to the increased reliance on powerful, networked computers to help run businesses and keep track of our personal information, entire industries have been formed around the practice of network and computer security. Enterprises have solicited the knowledge and skills of security experts to properly audit systems and tailor solutions to fit the operating requirements of their organization. Because most organizations are increasingly dynamic in nature, their workers are accessing critical company IT resources locally and remotely, hence the need for secure computing environments has become more pronounced. Unfortunately, many organizations (as well as individual users) regard security as more of an afterthought, a process that is overlooked in favor of increased power, productivity, convenience, ease of use, and budgetary concerns. Proper security implementation is often enacted postmortem - after an unauthorized intrusion has already occurred. Taking the correct measures prior to connecting a site to an untrusted network, such as the Internet, is an effective means of thwarting many attempts at intrusion. Note This document makes several references to files in the /lib directory. When using 64-bit systems, some of the files mentioned may instead be located in /lib64 . 1.1. What is Computer Security? Computer security is a general term that covers a wide area of computing and information processing. Industries that depend on computer systems and networks to conduct daily business transactions and access critical information regard their data as an important part of their overall assets. Several terms and metrics have entered our daily business vocabulary, such as total cost of ownership (TCO), return on investment (ROI), and quality of service (QoS). Using these metrics, industries can calculate aspects such as data integrity and high-availability (HA) as part of their planning and process management costs. In some industries, such as electronic commerce, the availability and trustworthiness of data can mean the difference between success and failure. 1.1.1. Standardizing Security Enterprises in every industry rely on regulations and rules that are set by standards-making bodies such as the American Medical Association (AMA) or the Institute of Electrical and Electronics Engineers (IEEE). The same ideals hold true for information security. Many security consultants and vendors agree upon the standard security model known as CIA, or Confidentiality, Integrity, and Availability . This three-tiered model is a generally accepted component to assessing risks of sensitive information and establishing security policy. The following describes the CIA model in further detail: Confidentiality - Sensitive information must be available only to a set of pre-defined individuals. Unauthorized transmission and usage of information should be restricted. For example, confidentiality of information ensures that a customer's personal or financial information is not obtained by an unauthorized individual for malicious purposes such as identity theft or credit fraud. Integrity - Information should not be altered in ways that render it incomplete or incorrect. Unauthorized users should be restricted from the ability to modify or destroy sensitive information. Availability - Information should be accessible to authorized users any time that it is needed. 
Availability is a guarantee that information can be obtained with an agreed-upon frequency and timeliness. This is often measured in terms of percentages and agreed to formally in Service Level Agreements (SLAs) used by network service providers and their enterprise clients. 1.1.2. Cryptographic Software and Certifications The following Red Hat Knowledgebase article provides an overview of the Red Hat Enterprise Linux core crypto components, documenting what they are, how they are selected, how they are integrated into the operating system, how they support hardware security modules and smart cards, and how crypto certifications apply to them. RHEL7 Core Crypto Components | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-Overview_of_Security_Topics
Chapter 3. Creating an application using .NET 8.0 | Chapter 3. Creating an application using .NET 8.0 Learn how to create a C# hello-world application. Procedure Create a new Console application in a directory called my-app : The output returns: A simple Hello World console application is created from a template. The application is stored in the specified my-app directory. Verification steps Run the project: The output returns: | [
"dotnet new console --output my-app",
"The template \"Console Application\" was created successfully. Processing post-creation actions Running 'dotnet restore' on my-app /my-app.csproj Determining projects to restore Restored /home/ username / my-app /my-app.csproj (in 67 ms). Restore succeeded.",
"dotnet run --project my-app",
"Hello World!"
] | https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/creating-an-application-using-dotnet_getting-started-with-dotnet-on-rhel-8 |
Chapter 4. Resource Management | Chapter 4. Resource Management Network priority cgroup resource controller Red Hat Enterprise Linux 6.3 introduces the Network Priority ( net_prio ) resource controller, which provides a way to dynamically set the priority of network traffic per network interface for applications within various cgroups. For more information, refer to the Resource Management Guide . OOM control and notification API for cgroups The memory resource controller implements an Out-of-Memory (OOM) notifier which uses the new notification API. When enabled (by executing echo 1 > memory.oom_control ), an application is notified via eventfd when an OOM occurs. Note that OOM notification does not function for root cgroups. New numad package The numad package provides a daemon for NUMA (Non-Uniform Memory Access) systems that monitors NUMA characteristics. As an alternative to manual static CPU pinning and memory assignment, numad provides dynamic adjustment to minimize memory latency on an ongoing basis. The package also provides an interface that can be used to query the numad daemon for the best manual placement of an application. The numad package is introduced as a Technology Preview. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/resource_management
Chapter 12. What huge pages do and how they are consumed by applications | Chapter 12. What huge pages do and how they are consumed by applications 12.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Container Platform, applications in a pod can allocate and consume pre-allocated huge pages. 12.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. 
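For example, to reserve four 1Gi pages plus 256 2Mi pages and make 1Gi the default size, kernel arguments along the following lines could be added to the boot command line. The page counts are illustrative assumptions and must fit within the physical memory of the node:

default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=256

The reserved pages are then reported by the node as hugepages-1Gi and hugepages-2Mi capacity.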
Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 12.3. Consuming huge pages resources using the Downward API You can use the Downward API to inject information about the huge pages resources that are consumed by a container. You can inject the resource allocation as environment variables, a volume plugin, or both. Applications that you develop and run in the container can determine the resources that are available by reading the environment variables or files in the specified volumes. Procedure Create a hugepages-volume-pod.yaml file that is similar to the following example: apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ "IPC_LOCK" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: "1Gi" cpu: "1" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: "hugepages_1G_request" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the REQUESTS_HUGEPAGES_1GI environment variable. <.> Specifies to read the resource use from requests.hugepages-1Gi and expose the value as the file /etc/podinfo/hugepages_1G_request . Create the pod from the hugepages-volume-pod.yaml file: USD oc create -f hugepages-volume-pod.yaml Verification Check the value of the REQUESTS_HUGEPAGES_1GI environment variable: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- env | grep REQUESTS_HUGEPAGES_1GI Example output REQUESTS_HUGEPAGES_1GI=2147483648 Check the value of the /etc/podinfo/hugepages_1G_request file: USD oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') \ -- cat /etc/podinfo/hugepages_1G_request Example output 2 Additional resources Allowing containers to consume Downward API objects 12.4. Configuring huge pages at boot time Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. 
USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 12.5. Disabling Transparent Huge Pages Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Since THP automatically manages the huge pages, this is not always handled optimally for all types of workloads. THP can lead to performance regressions, since many applications handle huge pages on their own. Therefore, consider disabling THP. The following steps describe how to disable THP using the Node Tuning Operator (NTO). Procedure Create a file with the following content and name it thp-disable-tuned.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker Create the Tuned object: USD oc create -f thp-disable-tuned.yaml Check the list of active profiles: USD oc get profile -n openshift-cluster-node-tuning-operator Verification Log in to one of the nodes and do a regular THP check to verify if the nodes applied the profile successfully: USD cat /sys/kernel/mm/transparent_hugepage/enabled Example output always madvise [never] | [
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi",
"oc create -f hugepages-volume-pod.yaml",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI",
"REQUESTS_HUGEPAGES_1GI=2147483648",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request",
"2",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker",
"oc create -f thp-disable-tuned.yaml",
"oc get profile -n openshift-cluster-node-tuning-operator",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"always madvise [never]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/scalability_and_performance/what-huge-pages-do-and-how-they-are-consumed |
6.3. Recovering Physical Volume Metadata | 6.3. Recovering Physical Volume Metadata If the volume group metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you will get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID. You may be able to recover the data from the physical volume by writing a new metadata area on the physical volume specifying the same UUID as the lost metadata. Warning You should not attempt this procedure with a working LVM logical volume. You will lose your data if you specify the incorrect UUID. The following example shows the sort of output you may see if the metadata area is missing or corrupted. You may be able to find the UUID for the physical volume that was overwritten by looking in the /etc/lvm/archive directory. Look in the file VolumeGroupName_xxxx .vg for the last known valid archived LVM metadata for that volume group. Alternatively, you may find that deactivating the volume and setting the partial ( -P ) argument will enable you to find the UUID of the missing or corrupted physical volume. Use the --uuid and --restorefile arguments of the pvcreate command to restore the physical volume. The following example labels the /dev/sdh1 device as a physical volume with the UUID indicated above, FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk . This command restores the physical volume label with the metadata information contained in VG_00050.vg , the most recent good archived metadata for the volume group. The restorefile argument instructs the pvcreate command to make the new physical volume compatible with the old one on the volume group, ensuring that the new metadata will not be placed where the old physical volume contained data (which could happen, for example, if the original pvcreate command had used the command line arguments that control metadata placement, or if the physical volume was originally created using a different version of the software that used different defaults). The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas. You can then use the vgcfgrestore command to restore the volume group's metadata. You can now display the logical volumes. The following commands activate the volumes and display the active volumes. If the on-disk LVM metadata takes up at least as much space as what overrode it, this command can recover the physical volume. If what overrode the metadata went past the metadata area, the data on the volume may have been affected. You might be able to use the fsck command to recover that data. | [
"lvs -a -o +devices Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG.",
"vgchange -an --partial Partial mode. Incomplete volume groups will be activated read-only. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.",
"pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 Physical volume \"/dev/sdh1\" successfully created",
"vgcfgrestore VG Restored volume group VG",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi--- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi--- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)",
"lvchange -ay /dev/VG/stripe lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi-a- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi-a- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/mdatarecover |
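As a complement to the archive lookup described above, the vgcfgrestore command can also list the archive and backup metadata files that LVM has recorded for a volume group, which makes it easier to choose the most recent known-good file to pass to --restorefile. A minimal sketch, assuming the volume group is named VG and the archive file is the VG_00050.vg file from the example; adjust both to match your system.

# List the archived and backup metadata files recorded for volume group VG,
# along with the time and the command that produced each one.
vgcfgrestore --list VG

# Optionally inspect the chosen archive before restoring; its physical_volumes
# section records the UUID and device that the metadata expects.
less /etc/lvm/archive/VG_00050.vg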
Chapter 2. Architecture of OpenShift Data Foundation | Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from, Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on installer-provisioned or user-provisioned infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To learn more about interoperability of components for Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Tip For IBM Power, see Installing on IBM Power . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that you can easily automate the task and resource characteristics. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that draws on other operators in specific tested ways to codify and enforce the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment. The rook-ceph and noobaa operators provide the storage cluster resource that wraps these resources. Rook-ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services Object Bucket Claims (OBCs) made against it in on-premises environments. Additionally, for internal mode clusters, it provides the ceph cluster resource, which manages the deployments and services representing the following: Object Storage Daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) RADOS Object Gateways (RGWs) on-premises only Multicloud Object Gateway operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway (MCG) object service. It creates an object storage class and services the OBCs made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. Note OpenShift Data Foundation's default configuration for MCG is optimized for low resource consumption and not performance. If you plan to use MCG often, see information about increasing resource limits in the knowledgebase article Performance tuning guide for Multicloud Object Gateway . 2.2. Storage cluster deployment approaches The growing list of operating modalities is evidence that flexibility is a core tenet of Red Hat OpenShift Data Foundation. This section provides you with information that will help you select the most appropriate approach for your environments.
You can deploy Red Hat OpenShift Data Foundation either entirely within OpenShift Container Platform (Internal approach) or make its services available from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator-based deployment and management. You can use the internal-attached device approach in the graphical user interface (GUI) to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications. The operators in Red Hat OpenShift Container Platform manage these applications. A simple deployment is best for situations where: Storage requirements are not clear. Red Hat OpenShift Data Foundation services run co-resident with the applications. Creating a node instance of a specific size is difficult, for example, on bare metal. For Red Hat OpenShift Data Foundation to run co-resident with the applications, the nodes must have local storage devices, or portable storage devices attached to them dynamically, such as EBS volumes on EC2, vSphere Virtual Volumes on VMware, or SAN volumes. Note PowerVC dynamically provisions the SAN volumes. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Red Hat OpenShift Container Platform manages these infrastructure nodes. An optimized approach is best for situations when: Storage requirements are clear. Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes. Creating a node instance of a specific size is easy, for example, in cloud or virtualized environments. 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices). Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team, such as Site Reliability Engineering (SRE) or a storage team, needs to manage the external cluster providing storage services, possibly a pre-existing one. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that the containers are running, and maintain network communication and separation between the pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster-level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters.
In order to separate OpenShift Data Foundation layer workload from applications, ensure that you use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation . Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, you require a minimal cluster of 3 worker nodes. Make sure that the nodes are spread across 3 different racks, or availability zones, to ensure availability. In order for OpenShift Data Foundation to run on worker nodes, you need to attach the local storage devices, or portable storage devices, to the worker nodes dynamically. When OpenShift Data Foundation is deployed in external mode, it runs on multiple nodes. This allows Kubernetes to reschedule on the available nodes in case of a failure. Note OpenShift Data Foundation requires the same number of subscriptions as OpenShift Container Platform. However, if OpenShift Data Foundation is running on infra nodes, OpenShift does not require an OpenShift Container Platform subscription for these nodes. Therefore, the OpenShift Data Foundation control plane does not require additional OpenShift Container Platform and OpenShift Data Foundation subscriptions. For more information, see Chapter 6, Subscriptions . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/odf-architecture_rhodf
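The node type discussion above recommends provisioning nodes labeled as infra so that OpenShift Data Foundation workloads are kept apart from application workloads. The following is a minimal sketch of that labeling with oc; <node_name> is a placeholder, and the authoritative labels and any required taints are described in the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation knowledgebase article referenced above.

# Mark an existing node as an infra node (placeholder node name).
oc label node <node_name> node-role.kubernetes.io/infra=""

# Add the OpenShift Data Foundation storage label so that ODF components can be
# scheduled onto this node.
oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=""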
2.7. GRUB Menu Configuration File | 2.7. GRUB Menu Configuration File The configuration file ( /boot/grub/grub.conf ), which is used to create the list of operating systems to boot in GRUB's menu interface, essentially allows the user to select a pre-set group of commands to execute. The commands given in Section 2.6, "GRUB Commands" can be used, as well as some special commands that are only available in the configuration file. 2.7.1. Configuration File Structure The GRUB menu interface configuration file is /boot/grub/grub.conf . The commands to set the global preferences for the menu interface are placed at the top of the file, followed by stanzas for each operating kernel or operating system listed in the menu. The following is a very basic GRUB menu configuration file designed to boot either Red Hat Enterprise Linux or Microsoft Windows 2000: This file configures GRUB to build a menu with Red Hat Enterprise Linux as the default operating system and sets it to autoboot after 10 seconds. Two sections are given, one for each operating system entry, with commands specific to the system disk partition table. Note Note that the default is specified as an integer; default=0 refers to the first title line in the GRUB configuration file. For the Windows section to be set as the default in the example, change the default=0 to default=1 . Configuring a GRUB menu configuration file to boot multiple operating systems is beyond the scope of this chapter. Consult Section 2.9, "Additional Resources" for a list of additional resources. | [
"default=0 timeout=10 splashimage=(hd0,0)/grub/splash.xpm.gz hiddenmenu title Red Hat Enterprise Linux AS (2.6.8-1.523) root (hd0,0) kernel /vmlinuz-2.6.8-1.523 ro root=/dev/VolGroup00/LogVol00 rhgb quiet initrd /initrd-2.6.8-1.523.img # section to load Windows title Windows rootnoverify (hd0,0) chainloader +1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-grub-configfile |
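Because the default directive counts title stanzas from 0 in file order, it can help to list the stanzas before changing the default entry. A minimal sketch, assuming the standard /boot/grub/grub.conf path used in this chapter:

# List the "title" stanzas in the order they appear; the first line printed
# corresponds to default=0, the second to default=1, and so on.
grep "^title" /boot/grub/grub.conf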
Linux Domain Identity, Authentication, and Policy Guide | Linux Domain Identity, Authentication, and Policy Guide Red Hat Enterprise Linux 7 Using Red Hat Identity Management in Linux environments Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Lucie Manaskova Red Hat Customer Content Services Aneta Steflova Petrova Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services | [
"lookup_family_order = ipv4_only",
"hostname server.example.com",
"ip addr show 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:4a:10:4e:33 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1 /24 brd 192.0.2.255 scope global dynamic eth0 valid_lft 106694sec preferred_lft 106694sec inet6 2001:DB8::1111 /32 scope global dynamic valid_lft 2591521sec preferred_lft 604321sec inet6 fe80::56ee:75ff:fe2b:def6/64 scope link valid_lft forever preferred_lft forever",
"dig +short server.example.com A 192.0.2.1",
"dig +short server.example.com AAAA 2001:DB8::1111",
"dig +short -x 192.0.2.1 server.example.com",
"dig +short -x 2001:DB8::1111 server.example.com",
"dig +dnssec @ IP_address_of_the_DNS_forwarder . SOA",
";; ->>HEADER<<- opcode: QUERY, status: NOERROR , id: 48655 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; ANSWER SECTION: . 31679 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2015100701 1800 900 604800 86400 . 31679 IN RRSIG SOA 8 0 86400 20151017170000 20151007160000 62530 . GNVz7SQs [...]",
"127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.0.2.1 server.example.com server 2001:DB8::1111 server.example.com server",
"systemctl status firewalld.service",
"systemctl start firewalld.service systemctl enable firewalld.service",
"firewall-cmd --permanent --add-port={80/tcp,443/tcp, list_of_ports }",
"firewall-cmd --permanent --add-service={freeipa-ldap, list_of_services }",
"firewall-cmd --reload",
"firewall-cmd --runtime-to-permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp,123/udp}",
"yum install ipa-server",
"yum install ipa-server ipa-server-dns",
"dig @ IP_address +norecurse +short ipa.example.com. NS",
"acl authorized { 192.0.2.0/24 ; 198.51.100.0/24 ; }; options { allow-query { any; }; allow-recursion { authorized ; }; };",
"ipa-server-install --auto-reverse --allow-zone-overlap",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]: yes",
"Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Do you want to configure DNS forwarders? [yes]:",
"Do you want to search for missing reverse zones? [yes]:",
"Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.",
"Continue to configure the system with these values? [no]: yes",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------",
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]:",
"Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"Continue to configure the system with these values? [no]: yes",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------",
"Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate",
"ipa-server-install --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed",
"ipa-server-install --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --ca-cert-file ca.crt",
"ipa-server-install --realm EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"kinit admin",
"ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------",
"ipa server-del server.example.com",
"ipa-server-install --uninstall",
"ipactl stop",
"yum install ipa-client",
"Client hostname: client.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: server.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? [no]: yes",
"User authorized to enroll computers: admin Password for [email protected]",
"Client configuration complete.",
"kinit admin",
"ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com",
"ipa-client-install --password 'W5YpARl=7M.n' --domain example.com --server server.example.com --unattended",
"kinit admin",
"ipa host-add client.example.com --password= secret",
"%packages @ X Window System @ Desktop @ Sound and Video ipa-client",
"%post --log=/root/ks-post.log Generate SSH keys to ensure that ipa-client-install uploads them to the IdM server /usr/sbin/sshd-keygen Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com",
"env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install",
"BASE dc=example,dc=com URI ldap://ldap.example.com #URI ldaps://server.example.com # modified by IPA #BASE dc=ipa,dc=example,dc=com # modified by IPA",
"[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"ipa-client-install --uninstall",
"ipa-client-install --force-join",
"User authorized to enroll computers: admin Password for [email protected]",
"ipa-client-install --keytab /tmp/krb5.keytab",
"ipa service-find client.example.com",
"ipa hostgroup-find client.example.com",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa host-del client.example.com",
"ipa service-add service_name/new_host_name",
"kinit admin",
"ipa-replica-install --principal admin --admin-password admin_password",
"ipa-replica-install --principal admin --admin-password admin_password",
"kinit admin",
"ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------",
"ipa-replica-install",
"kinit admin",
"ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com",
"ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------",
"ipa-replica-install --password ' W5YpARl=7M.n '",
"ipa-replica-install --setup-dns --forwarder 192.0.2.1",
"DOMAIN= example.com NAMESERVER= replica",
"for i in _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp ; do dig @USD{NAMESERVER} USD{i}.USD{DOMAIN} srv +nocmd +noquestion +nocomments +nostats +noaa +noadditional +noauthority done | egrep \"^_\" _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server1.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server2.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server1.example.com.",
"ipa-replica-install --setup-ca",
"ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret",
"[admin@server1 ~]USD ipa user-add test_user --first= Test --last= User",
"[admin@server2 ~]USD ipa user-show test_user",
"ipactl start",
"ipactl stop",
"ipactl restart",
"[local_user@server ~]USD kinit Password for [email protected]:",
"[local_user@server ~]USD kinit admin Password for [email protected]:",
"klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11/10/2015 08:35:45 11/10/2015 18:35:45 krbtgt/[email protected]",
"ipa user-add user_name",
"ipa help topics automember Auto Membership Rule. automount Automount caacl Manage CA ACL rules.",
"ipa help automember Auto Membership Rule. Bring clarity to the membership of hosts and users by configuring inclusive or exclusive regex patterns, you can automatically assign a new entries into a group or hostgroup based upon attribute information. EXAMPLES: Add the initial group or hostgroup: ipa hostgroup-add --desc=\"Web Servers\" webservers ipa group-add --desc=\"Developers\" devel",
"ipa help commands automember-add Add an automember rule. automember-add-condition Add conditions to an automember rule.",
"ipa automember-add --help Usage: ipa [global-options] automember-add AUTOMEMBER-RULE [options] Add an automember rule. Options: -h, --help show this help message and exit --desc=STR A description of this auto member rule",
"ipaUserSearchFields: uid,givenname,sn,telephonenumber,ou,title",
"ipa permission-add --permissions=read --permissions=write --permissions=delete",
"ipa permission-add --permissions={read,write,delete}",
"ipa certprofile-show certificate_profile --out= exported\\*profile.cfg",
"ipa user-find --------------- 4 users matched ---------------",
"ipa group-find keyword ---------------- 2 groups matched ----------------",
"ipa group-find --user= user_name",
"ipa group-find --no-user= user_name",
"ipa host-show server.example.com Host name: server.example.com Principal name: host/[email protected]",
"ipa config-mod --searchrecordslimit=500 --searchtimelimit=5",
"ipa user-find --sizelimit=200 --timelimit=120",
"https://server.example.com",
"[admin@server ~]USD ipa idoverrideuser-add 'Default Trust View' [email protected]",
"ipa-client-install --configure-firefox",
"scp /etc/krb5.conf root@ externalmachine.example.com :/etc/krb5_ipa.conf",
"export KRB5_CONFIG=/etc/krb5_ipa.conf",
"ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------",
"ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------",
"ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa help topology",
"ipa topologysuffix-show --help",
"ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both",
"ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------",
"ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------",
"ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------",
"ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ipa: ERROR: Server removal aborted: Removal of 'server1.example.com' leads to disconnected topology in suffix 'domain': Topology does not allow server server2.example.com to replicate with servers: server3.example.com server4.example.com",
"[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------",
"ipa server-install --uninstall",
"ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA NTP servers: server1.example.com, server2.example.com, server3.example.com IPA CA renewal master: server1.example.com",
"ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, NTP server, KRA server",
"ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------",
"ipa config-mod --ca-renewal-master-server new_ca_renewal_master.example.com IPA masters: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA NTP servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA renewal master: new_ca_renewal_master.example.com",
"ipa-crlgen-manage status CRL generation: enabled",
"ipa-crlgen-manage disable",
"ipa-crlgen-manage enable",
"ipa server-state replica.idm.example.com --state=hidden",
"ipa server-state replica.idm.example.com --state=enabled",
"kinit admin",
"ipa domainlevel-get ----------------------- Current domain level: 0 -----------------------",
"kinit admin",
"ipa domainlevel-set 1 ----------------------- Current domain level: 1 -----------------------",
"yum update ipa-*",
"NSSProtocol TLSv1.0,TLSv1.1,TLSv1.2",
"systemctl restart httpd.service",
"getcert list -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" | grep post-save post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert \"subsystemCert cert-pki-ca\"",
"yum update ipa-*",
"scp /usr/share/ipa/copy-schema-to-ca.py root@rhel6:/root/",
"python copy-schema-to-ca.py ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60kerberos.ldif [... output truncated ...] ipa : INFO Schema updated successfully",
"ipa-replica-prepare rhel7.example.com --ip-address 192.0.2.1 Directory Manager (existing master) password: Preparing replica for rhel7.example.com from rhel6.example.com [... output truncated ...] The ipa-replica-prepare command was successful",
"scp /var/lib/ipa/replica-info-replica.example.com.gpg root@rhel7:/var/lib/ipa/",
"+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha",
"ipa-replica-install /var/lib/ipa/replica-info-rhel7.example.com.gpg --setup-ca --ip-address 192.0.2.1 --setup-dns --forwarder 192.0.2.20 Directory Manager (existing master) password: Checking DNS forwarders, please wait Run connection check to master [... output truncated ...] Client configuration complete.",
"ipactl status Directory Service: RUNNING [... output truncated ...] ipa: INFO: The ipactl command was successful",
"[root@rhel7 ~]USD kinit admin [root@rhel7 ~]USD ipa-csreplica-manage list rhel6.example.com: master rhel7.example.com: master",
"ipa-csreplica-manage list --verbose rhel7.example.com rhel7.example.com last init status: None last init ended: 1970-01-01 00:00:00+00:00 last update status: Error (0) Replica acquired successfully: Incremental update succeeded last update ended: 2017-02-13 13:55:13+00:00",
"getcert stop-tracking -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" Request \"20201127184547\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" Request \"20201127184548\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" Request \"20201127184549\" removed. getcert stop-tracking -d /etc/httpd/alias -n ipaCert Request \"20201127184550\" removed.",
"cp /usr/share/ipa/ca_renewal /var/lib/certmonger/cas/ chmod 0600 /var/lib/certmonger/cas/ca_renewal",
"restorecon /var/lib/certmonger/cas/ca_renewal",
"service certmonger restart",
"getcert list-cas CA 'dogtag-ipa- retrieve -agent-submit': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/dogtag-ipa-retrieve-agent-submit",
"grep internal= /var/lib/pki-ca/conf/password.conf",
"getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"auditSigningCert cert-pki-ca\"' -T \"auditSigningCert cert-pki-ca\" -P database_pin New tracking request \"20201127184743\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"ocspSigningCert cert-pki-ca\"' -T \"ocspSigningCert cert-pki-ca\" -P database_pin New tracking request \"20201127184744\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"subsystemCert cert-pki-ca\"' -T \"subsystemCert cert-pki-ca\" -P database_pin New tracking request \"20201127184745\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /etc/httpd/alias -n ipaCert -C /usr/lib64/ipa/certmonger/restart_httpd -T ipaCert -p /etc/httpd/alias/pwdfile.txt New tracking request \"20201127184746\" added.",
"service pki-cad stop",
"ca.crl.MasterCRL.enableCRLCache= false ca.crl.MasterCRL.enableCRLUpdates= false",
"service pki-cad start",
"RewriteRule ^/ipa/crl/MasterCRL.bin https://rhel6.example.com/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]",
"service httpd restart",
"ipactl stop Stopping CA Service Stopping pki-ca: [ OK ] Stopping HTTP Service Stopping httpd: [ OK ] Stopping MEMCACHE Service Stopping ipa_memcached: [ OK ] Stopping DNS Service Stopping named: . [ OK ] Stopping KPASSWD Service Stopping Kerberos 5 Admin Server: [ OK ] Stopping KDC Service Stopping Kerberos 5 KDC: [ OK ] Stopping Directory Service Shutting down dirsrv: EXAMPLE-COM... [ OK ] PKI-IPA... [ OK ]",
"mkdir -p /home/idm/backup/",
"chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/",
"mv /var/lib/ipa/backup/* /home/idm/backup/",
"rm -rf /var/lib/ipa/backup/",
"ln -s /home/idm/backup/ /var/lib/ipa/backup/",
"mkdir -p /home/idm/backup/",
"chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/",
"mv /var/lib/ipa/backup/* /home/idm/backup/",
"mount -o bind /home/idm/backup/ /var/lib/ipa/backup/",
"/home/idm/backup/ /var/lib/ipa/backup/ none bind 0 0",
"TMPDIR= /path/to/backup ipa-backup",
"cat >keygen <<EOF > %echo Generating a standard key > Key-Type: RSA > Key-Length:2048 > Name-Real: IPA Backup > Name-Comment: IPA Backup > Name-Email: [email protected] > Expire-Date: 0 > %pubring /root/backup.pub > %secring /root/backup.sec > %commit > %echo done > EOF",
"gpg --batch --gen-key keygen gpg --no-default-keyring --secret-keyring /root/backup.sec --keyring /root/backup.pub --list-secret-keys",
"ipa-backup --gpg --gpg-keyring=/root/backup",
"/usr/share/ipa/html /root/.pki /etc/pki-ca /etc/pki/pki-tomcat /etc/sysconfig/pki /etc/httpd/alias /var/lib/pki /var/lib/pki-ca /var/lib/ipa/sysrestore /var/lib/ipa-client/sysrestore /var/lib/ipa/dnssec /var/lib/sss/pubconf/krb5.include.d/ /var/lib/authconfig/last /var/lib/certmonger /var/lib/ipa /var/run/dirsrv /var/lock/dirsrv",
"/etc/named.conf /etc/named.keytab /etc/resolv.conf /etc/sysconfig/pki-ca /etc/sysconfig/pki-tomcat /etc/sysconfig/dirsrv /etc/sysconfig/ntpd /etc/sysconfig/krb5kdc /etc/sysconfig/pki/ca/pki-ca /etc/sysconfig/ipa-dnskeysyncd /etc/sysconfig/ipa-ods-exporter /etc/sysconfig/named /etc/sysconfig/ods /etc/sysconfig/authconfig /etc/ipa/nssdb/pwdfile.txt /etc/pki/ca-trust/source/ipa.p11-kit /etc/pki/ca-trust/source/anchors/ipa-ca.crt /etc/nsswitch.conf /etc/krb5.keytab /etc/sssd/sssd.conf /etc/openldap/ldap.conf /etc/security/limits.conf /etc/httpd/conf/password.conf /etc/httpd/conf/ipa.keytab /etc/httpd/conf.d/ipa-pki-proxy.conf /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf.d/nss.conf /etc/httpd/conf.d/ipa.conf /etc/ssh/sshd_config /etc/ssh/ssh_config /etc/krb5.conf /etc/ipa/ca.crt /etc/ipa/default.conf /etc/dirsrv/ds.keytab /etc/ntp.conf /etc/samba/smb.conf /etc/samba/samba.keytab /root/ca-agent.p12 /root/cacert.p12 /var/kerberos/krb5kdc/kdc.conf /etc/systemd/system/multi-user.target.wants/ipa.service /etc/systemd/system/multi-user.target.wants/sssd.service /etc/systemd/system/multi-user.target.wants/certmonger.service /etc/systemd/system/pki-tomcatd.target.wants/[email protected] /var/run/ipa/services.list /etc/opendnssec/conf.xml /etc/opendnssec/kasp.xml /etc/ipa/dnssec/softhsm2.conf /etc/ipa/dnssec/softhsm_pin_so /etc/ipa/dnssec/ipa-ods-exporter.keytab /etc/ipa/dnssec/ipa-dnskeysyncd.keytab /etc/idm/nssdb/cert8.db /etc/idm/nssdb/key3.db /etc/idm/nssdb/secmod.db /etc/ipa/nssdb/cert8.db /etc/ipa/nssdb/key3.db /etc/ipa/nssdb/secmod.db",
"/var/log/pki-ca /var/log/pki/ /var/log/dirsrv/slapd-PKI-IPA /var/log/httpd /var/log/ipaserver-install.log /var/log/kadmind.log /var/log/pki-ca-install.log /var/log/messages /var/log/ipaclient-install.log /var/log/secure /var/log/ipaserver-uninstall.log /var/log/pki-ca-uninstall.log /var/log/ipaclient-uninstall.log /var/named/data/named.run",
"ipa-restore /path/to/backup",
"ipa-restore --instance=IPA-REALM /path/to/backup",
"systemctl stop sssd",
"find /var/lib/sss/ ! -type d | xargs rm -f",
"systemctl start sssd",
"ipa-restore --gpg-keyring=/root/backup /path/to/backup",
"[jsmith@server ~]USD ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname --attrs=displayname --attrs=title --attrs=initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"[jsmith@server ~]USD ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname --attrs=displayname --attrs=title --attrs=initials --attrs=surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa delegation-add \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --group=engineering_managers --membergroup=engineering -------------------------------------- Added delegation \"basic manager attrs\" -------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber Member user group: engineering User group: engineering_managers",
"[jsmith@server ~]USD ipa delegation-mod \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --attrs=displayname ----------------------------------------- Modified delegation \"basic manager attrs\" ----------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber, displayname Member user group: engineering User group: engineering_managers",
"kinit admin ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator",
"ipa role-add-privilege --privileges=\"User Administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------",
"ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------",
"cn=automount,dc=example,dc=com",
"(!(objectclass=posixgroup))",
"uid=*,cn=users,cn=accounts,dc=com",
"ipa permission-add \"dns admin permission\"",
"--bindtype=all",
"--permissions=read --permissions=write --permissions={read,write}",
"--attrs=description --attrs=automountKey --attrs={description,automountKey}",
"ipa permission-add \"manage service\" --permissions=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby",
"ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --permissions=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation",
"ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --permissions=write --attrs=description",
"ipa permission-add ManageHost --permissions=\"write\" --subtree=cn=computers,cn=accounts,dc=testrelm,dc=com --attr=nshostlocation --memberof=admins",
"ipa permission-mod 'System: Modify Users' --type=group ipa: ERROR: invalid 'ipapermlocation': not modifiable on managed permissions",
"ipa permission-mod 'System: Modify Users' --excludedattrs=gecos ------------------------------------------ Modified permission \"System: Modify Users\"",
"[jsmith@server ~]USD ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"",
"[jsmith@server ~]USD ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\"",
"authconfig --enablemkhomedir --update",
"ipa automountlocation-add userdirs Location: userdirs",
"ipa automountkey-add userdirs auto.direct --key=/share --info=\"-ro,soft, server.example.com:/home/share\" Key: /share Mount information: -ro,soft, server.example.com:/home/share",
"ipa user-add First name: first_name Last name: last_name User login [default_login]: custom_login",
"ipa stageuser-add stage_user_login --first= first_name --last= last_name --email= email_address",
"'(?!^[0-9]+USD)^[a-zA-Z0-9_.][a-zA-Z0-9_.-]*[a-zA-Z0-9_.USD-]?USD'",
"ipa config-mod --maxusername=64 Maximum username length: 64",
"ipa user-find --------------- 23 users matched --------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 1453200000 GID: 1453200000 Account disabled: False Password: True Kerberos keys available: True User login: user",
"ipa user-find --title= user_title --------------- 2 users matched --------------- User login: user Job Title: Title User login: user2 Job Title: Title",
"ipa user-find user --------------- 3 users matched --------------- User login: user User login: user2 User login: user3",
"ipa user-show user_login User login: user_login First name: first_name Last name: last_name",
"ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------",
"ipa user-del user_login -------------------- Deleted user \"user3\" --------------------",
"ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa stageuser-del user_login -------------------------- Deleted stage user \"user_login\" --------------------------",
"ipa user-del --continue user1 user2 user3",
"ipa user-undel user_login ------------------------------ Undeleted user account \"user_login\" ------------------------------",
"ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------",
"ipa user-mod user_login --title= new_title",
"ipa user-mod user --addattr=mobile= new_mobile_number -------------------- Modified user \"user\" -------------------- User login: user Mobile Telephone Number: mobile_number, new_mobile_number",
"ipa user-mod user --addattr=mobile= mobile_number_1 --addattr=mobile= mobile_number_2",
"ipa user-mod user --email= [email protected] ipa user-mod user --addattr=mail= [email protected]",
"ipa user-find User login: user First name: User Last name: User Home directory: /home/user Login shell: /bin/sh UID: 1453200009 GID: 1453200009 Account disabled: True Password: False Kerberos keys available: False",
"ipa user-disable user_login ---------------------------- Disabled user account \"user_login\" ----------------------------",
"ipa user-enable user_login ---------------------------- Enabled user account \"user_login\" ----------------------------",
"kinit admin",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\" -------------------------------- Added role \"System Provisioning\" -------------------------------- Role name: System Provisioning Description: Responsible for provisioning stage users",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\" Role name: System Provisioning Description: Responsible for provisioning stage users Privileges: Stage User Provisioning ---------------------------- Number of privileges added 1 ----------------------------",
"ipa user-add stage_user_admin --password First name: first_name Last name: last_name Password: Enter password again to verify:",
"ipa role-add-member \"System Provisioning\" --users=stage_user_admin Role name: System Provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ------------------------- Number of members added 1 -------------------------",
"ipa role-show \"System Provisioning\" -------------- 1 role matched -------------- Role name: System provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ---------------------------- Number of entries returned 1 ----------------------------",
"kinit stage_user_admin Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:",
"klist Ticket cache: KEYRING:persistent:0:krb_ccache_xIlCQDW Default principal: [email protected] Valid starting Expires Service principal 02/25/2016 11:42:20 02/26/2016 11:42:20 krbtgt/EXAMPLE.COM",
"ipa stageuser-add stage_user First name: first_name Last name: last_name ipa: ERROR: stage_user: stage user not found",
"ipa stageuser-show stage_user ipa: ERROR: stage_user: stage user not found",
"kinit admin Password for [email protected]: ipa stageuser-show stage_user User login: stage_user First name: Stage Last name: User",
"ipa user-add provisionator --first=provisioning --last=account --password",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"",
"ipa role-add-member --users=provisionator \"System Provisioning\"",
"ipa user-add activator --first=activation --last=account --password",
"ipa role-add-member --users=activator \"User Administrator\"",
"ipa group-add service-accounts",
"ipa pwpolicy-add service-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=20 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0",
"ipa group-add-member service-accounts --users={provisionator,activator}",
"kpasswd provisionator kpasswd activator",
"ipa-getkeytab -s example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab",
"#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done",
"chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target",
"systemctl enable ipa-activate-all.timer",
"dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name",
"dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/ user_login",
"ldapsearch -LLL -x -D \"uid= user_allowed_to_read ,cn=users,cn=accounts,dc=example, dc=com\" -w \" password \" -H ldap:// server.example.com -b \"cn=users, cn=accounts, dc=example, dc=com\" uid= user_login",
"dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE",
"dn: distinguished_name changetype: modrdn newrdn: uid= user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example",
"dn: cn= group_distinguished_name ,cn=groups,cn=accounts,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup cn: group_name gidNumber: GID_number",
"ldapsearch -YGSSAPI -H ldap:// server.example.com -b \"cn=groups,cn=accounts,dc=example,dc=com\" \"cn= group_name \"",
"dn: group_distinguished_name changetype: delete",
"dn: group_distinguished_name changetype: modify add: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com",
"dn: distinguished_name changetype: modify delete: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com",
"ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example changetype: add objectClass: top objectClass: inetorgperson cn: Stage sn: User adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example\"",
"ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example uid: stageuser sn: User cn: Stage has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person",
"ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=user1,cn=users,cn=accounts,dc=example changetype: modrdn newrdn: uid=user1 deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=example\"",
"ipa user-find --preserved=true --------------- 1 user matched --------------- User login: user1 First name: first_name Last name: last_name ---------------------------- Number of entries returned 1 ----------------------------",
"ipa host-add client1.example.com",
"ipa host-add --force --ip-address=192.168.166.31 client1.example.com",
"ipa host-add --force client1.example.com",
"ipa host-del --updatedns client1.example.com",
"[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa host-disable server.example.com",
"ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password",
"host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==",
"\"ssh-rsa ABCD1234...== ipaclient.example.com\"",
"ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4",
"server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==",
"[jsmith@server ~]USD ssh-keygen -t rsa -C \"server.example.com,1.2.3.4\" Generating public/private rsa key pair. Enter file in which to save the key (/home/jsmith/.ssh/id_rsa): /home/jsmith/.ssh/host_keys Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/jsmith/.ssh/host_keys. Your public key has been saved in /home/jsmith/.ssh/host_keys.pub. The key fingerprint is: SHA256:GAUIDVVEgly7rs1lTWP6oguHz8BKvyZkpqCqVSsmi7c server.example.com The key's randomart image is: +--[ RSA 2048]----+ | .. | | .+| | o .* | | o . .. *| | S + . o+| | E . .. .| | . = . o | | o . ..o| | .....| +-----------------+",
"[jsmith@server ~]USD cat /home/jsmith/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4",
"[jsmith@server ~]USD ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com",
"--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"",
"[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --sshpubkey= --updatedns host1.example.com",
"cn=server,ou=ethers,dc=example,dc=com",
"[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --macaddress=12:34:56:78:9A:BC server.example.com",
"ethers: ldap",
"getent ethers server.example.com",
"ipa group-show group_A Member users: user_1 Member groups: group_B Indirect Member users: user_2",
"ipa group-find --private ---------------- 2 groups matched ---------------- Group name: user1 Description: User private group for user1 GID: 830400006 Group name: user2 Description: User private group for user2 GID: 830400004 ---------------------------- Number of entries returned 2 ----------------------------",
"kinit admin",
"ipa group-add group_name ----------------------- Added group \"group_name\" ------------------------",
"kinit admin",
"ipa group-del group_name -------------------------- Deleted group \"group_name\" --------------------------",
"sss_cache -n host_group_name",
"ipa group-add-member group_name --users= user1 --users= user2 --groups= group1",
"ipa group-add-member group_name --external=' AD_DOMAIN \\ ad_user ' ipa group-add-member group_name --external=' ad_user @ AD_DOMAIN ' ipa group-add-member group_name --external=' ad_user @ AD_DOMAIN.EXAMPLE.COM '",
"ipa group-remove-member group_name --users= user1 --users= user2 --groups= group1",
"kinit admin",
"ipa-managed-entries --list",
"ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin",
"systemctl restart dirsrv.target",
"ipa config-mod --usersearch=\"uid,givenname,sn,telephonenumber,ou,title\" ipa config-mod --groupsearch=\"cn,description\"",
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add Automember Rule: all_hosts Grouping Type: hostgroup ------------------------------------- Added automember rule \"all_hosts\" ------------------------------------- Automember Rule: all_hosts",
"ipa automember-add-condition Automember Rule: all_hosts Attribute Key: fqdn Grouping Type: hostgroup [Inclusive Regex]: .* [Exclusive Regex]: --------------------------------- Added condition(s) to \"all_hosts\" --------------------------------- Automember Rule: all_hosts Inclusive Regex: fqdn=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add Automember Rule: ad_users Grouping Type: group ------------------------------------- Added automember rule \"ad_users\" ------------------------------------- Automember Rule: ad_users",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users= user1 --users= user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa-replica-manage dnarange-show masterA.example.com: 1001-1500 masterB.example.com: 1501-2000 masterC.example.com: No range set ipa-replica-manage dnarange-show masterA.example.com masterA.example.com: 1001-1500",
"ipa-replica-manage dnanextrange-show masterA.example.com: 1001-1500 masterB.example.com: No on-deck range set masterC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show masterA.example.com masterA.example.com: 1001-1500",
"ipa-replica-manage dnarange-set masterA.example.com 1250-1499",
"ipa-replica-manage dnanextrange-set masterB.example.com 1001-5000",
"sss_cache -u user",
"[bjensen@server ~]USD ipa config-mod --userobjectclasses= {top,person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser, employeeinfo }",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --groupobjectclasses= {top,groupofnames,nestedgroup,ipausergroup,ipaobject,ipasshuser, employeegroup }",
"[bjensen@server ~]USD kinit admin [bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject",
"# ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts",
"ipa service-add serviceName/hostname",
"ipa service-add HTTP/server.example.com ------------------------------------------------------- Added service \"HTTP/[email protected]\" ------------------------------------------------------- Principal: HTTP/[email protected] Managed by: ipaserver.example.com",
"ipa-getkeytab -s server.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts",
"ipa-getkeytab -s kdc.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts",
"kinit admin",
"ipa dnsrecord-add idm.example.com cluster --a-rec={192.0.2.40,192.0.2.41} Record name: cluster A record: 192.0.2.40, 192.0.2.41",
"ipa host-add cluster.idm.example.com ------------------------------------ Added host \"cluster.idm.example.com\" ------------------------------------ Host name: cluster.idm.example.com Principal name: host/[email protected] Password: False Keytab: False Managed by: cluster.idm.example.com",
"ipa service-add HTTP/cluster.idm.example.com ------------------------------------------------------------ Added service \"HTTP/[email protected]\" ------------------------------------------------------------ Principal: HTTP/[email protected] Managed by: cluster.idm.example.com",
"ipa service-allow-retrieve-keytab HTTP/cluster.idm.example.com --hosts={node01.idm.example.com,node02.idm.example.com} Principal: HTTP/[email protected] Managed by: cluster.idm.example.com Hosts allowed to retrieve keytab: node01.idm.example.com, node02.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa service-allow-create-keytab HTTP/cluster.idm.example.com --hosts=node01.idm.example.com Principal: HTTP/[email protected] Managed by: cluster.idm.example.com Hosts allowed to retrieve keytab: node01.idm.example.com, node02.idm.example.com Hosts allowed to create keytab: node01.idm.example.com ------------------------- Number of members added 1 -------------------------",
"kinit -kt /etc/krb5.keytab",
"ipa-getkeytab -s ipaserver.idm.example.com -p HTTP/cluster.idm.example.com -k /tmp/client.keytab",
"ipa-getkeytab -r -s ipaserver.idm.example.com -p HTTP/cluster.idm.example.com -k /tmp/client.keytab",
"[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa service-disable HTTP/server.example.com",
"ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts",
"ipa service-add-host principal --hosts= hostname",
"ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab",
"kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'",
"ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005",
"kinit admin",
"ipa host-add-managedby client2.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com",
"ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab",
"kinit -kt /etc/krb5.keytab host/[email protected]",
"kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]",
"ipa help idviews",
"ipa idview-add --help",
"kinit admin",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 user --sshpubkey=\" ssh-rsa AAAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected] \" ----------------------------- Added User ID override \"user\" ----------------------------- Anchor to override: user SSH public key: ssh-rsa AAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected]",
"ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"",
"ipa idview-apply example_for_host1 --hosts=host1.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa service-mod service/[email protected] --ok-as-delegate= 1",
"ipa service-mod test/[email protected] --requires-pre-auth= 0",
"kvno test/[email protected] klist -f Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 02/19/2014 09:59:02 02/20/2014 08:21:33 test/ipa/[email protected] Flags: FAT O",
"kadmin.local kadmin.local: getprinc test/ipa.example.com Principal: test/[email protected] Expiration date: [never] Attributes: REQUIRES_PRE_AUTH OK_AS_DELEGATE OK_TO_AUTH_AS_DELEGATE Policy: [none]",
"ipa user-add-principal user useralias -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], [email protected]",
"kinit -C useralias Password for [email protected]:",
"ipa user-remove-principal user useralias -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"ipa user-show user User login: user Principal name: [email protected] ipa user-remove-principal user user ipa: ERROR: invalid 'krbprincipalname': at least one value equal to the canonical principal name must be present",
"ipa: ERROR: The realm for the principal does not match the realm for this IPA server",
"ipa user-add-principal user user\\\\@example.com -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], user\\@[email protected]",
"kinit -E [email protected] Password for user\\@[email protected]:",
"ipa user-remove-principal user user\\\\@example.com -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"( host.example.com ,, nisdomain.example.com ) (-, user , nisdomain.example.com )",
"dn: ipaUniqueID=d4453480-cc53-11dd-ad8b-0800200c9a66,cn=ng,cn=alt, cn: netgroup1 memberHost: fqdn=host1.example.com,cn=computers,cn=accounts, memberHost: cn=VirtGuests,cn=hostgroups,cn=accounts, memberUser: cn=demo,cn=users,cn=accounts, memberUser: cn=Engineering,cn=groups,cn=accounts, nisDomainName: nisdomain.example.com",
"ipa netgroup-show netgroup1 Netgroup name: netgroup1 Description: my netgroup NIS domain name: nisdomain.example.com Member Host: VirtGuests Member Host: host1.example.com Member User: demo Member User: Engineering",
"ipa-nis-manage enable ipa-compat-manage enable",
"ldapmodify -x -D 'cn=directory manager' -W dn: cn=NIS Server,cn=plugins,cn=config changetype: modify add: nsslapd-pluginarg0 nsslapd-pluginarg0: 514",
"systemctl enable rpcbind.service systemctl start rpcbind.service",
"systemctl restart dirsrv.target",
"ipa netgroup-add --desc=\"Netgroup description\" --nisdomain=\"example.com\" example-netgroup",
"ipa netgroup-add-member --users= user_name --groups= group_name --hosts= host_name --hostgroups= host_group_name --netgroups= netgroup_name group_nameame",
"ipa netgroup-add-member --users={user1;user2,user3} --groups={group1,group2} example-group",
"ldapadd -h server.example.com -x -D \"cn=Directory Manager\" -W dn: nis-domain=example.com+nis-map=auto.example,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: example.com nis-map: auto.example nis-filter: (objectclass=automount) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} nis-base: automountmapname=auto.example,cn=default,cn=automount,dc=example,dc=com",
"ypcat -k -d example.com -h server.example.com auto.example",
"yum install yp-tools -y",
"#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.passwd) ; do IFS=' ' username=USD(echo USDline | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key uid=USD(echo USDline | cut -f3 -d:) gid=USD(echo USDline | cut -f4 -d:) gecos=USD(echo USDline | cut -f5 -d:) homedir=USD(echo USDline | cut -f6 -d:) shell=USD(echo USDline | cut -f7 -d:) # Now create this entry echo passw0rd1 | ipa user-add USDusername --first=NIS --last=USER --password --gidnumber=USDgid --uid=USDuid --gecos=\"USDgecos\" --homedir=USDhomedir --shell=USDshell ipa user-show USDusername done",
"kinit admin",
"sh /root/nis-users.sh nisdomain nis-master.example.com",
"#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 group > /dev/shm/nis-map.group 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.group); do IFS=' ' groupname=USD(echo USDline | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key gid=USD(echo USDline | cut -f3 -d:) members=USD(echo USDline | cut -f4 -d:) # Now create this entry ipa group-add USDgroupname --desc=NIS_GROUP_USDgroupname --gid=USDgid if [ -n \"USDmembers\" ]; then ipa group-add-member USDgroupname --users={USDmembers} fi ipa group-show USDgroupname done",
"kinit admin",
"sh /root/nis-groups.sh nisdomain nis-master.example.com",
"#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 hosts | egrep -v \"localhost|127.0.0.1\" > /dev/shm/nis-map.hosts 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=USD(echo USDline | awk '{print USD1}') hostname=USD(echo USDline | awk '{print USD2}') master=USD(ipa env xmlrpc_uri | tr -d '[:space:]' | cut -f3 -d: | cut -f3 -d/) domain=USD(ipa env domain | tr -d '[:space:]' | cut -f2 -d:) if [ USD(echo USDhostname | grep \"\\.\" |wc -l) -eq 0 ] ; then hostname=USD(echo USDhostname.USDdomain) fi zone=USD(echo USDhostname | cut -f2- -d.) if [ USD(ipa dnszone-show USDzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add --name-server=USDmaster --admin-email=root.USDmaster fi ptrzone=USD(echo USDipaddress | awk -F. '{print USD3 \".\" USD2 \".\" USD1 \".in-addr.arpa.\"}') if [ USD(ipa dnszone-show USDptrzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add USDptrzone --name-server=USDmaster --admin-email=root.USDmaster fi # Now create this entry ipa host-add USDhostname --ip-address=USDipaddress ipa host-show USDhostname done",
"kinit admin",
"sh /root/nis-hosts.sh nisdomain nis-master.example.com",
"#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -k -d USD1 -h USD2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=USD(echo USDline | awk '{print USD1}') triples=USD(echo USDline | sed \"s/^USDnetgroupname //\") echo \"ipa netgroup-add USDnetgroupname --desc=NIS_NG_USDnetgroupname\" if [ USD(echo USDline | grep \"(,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --hostcat=all\" fi if [ USD(echo USDline | grep \",,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --usercat=all\" fi for triple in USDtriples; do triple=USD(echo USDtriple | sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ USD(echo USDtriple | grep \",.*,\" | wc -l) -gt 0 ]; then hostname=USD(echo USDtriple | cut -f1 -d,) username=USD(echo USDtriple | cut -f2 -d,) domain=USD(echo USDtriple | cut -f3 -d,) hosts=\"\"; users=\"\"; doms=\"\"; [ -n \"USDhostname\" ] && hosts=\"--hosts=USDhostname\" [ -n \"USDusername\" ] && users=\"--users=USDusername\" [ -n \"USDdomain\" ] && doms=\"--nisdomain=USDdomain\" echo \"ipa netgroup-add-member USDnetgroup USDhosts USDusers USDdoms\" else netgroup=USDtriple echo \"ipa netgroup-add USDnetgroup --desc=NIS_NG_USDnetgroup\" fi done done",
"kinit admin",
"sh /root/nis-netgroups.sh nisdomain nis-master.example.com",
"#!/bin/sh USD1 is for the automount entry in ipa ipa automountlocation-add USD1 USD2 is the NIS domain, USD3 is the NIS master server, USD4 is the map name ypcat -k -d USD2 -h USD3 USD4 > /dev/shm/nis-map.USD4 2>&1 ipa automountmap-add USD1 USD4 basedn=USD(ipa env basedn | tr -d '[:space:]' | cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=USD2+nis-map=USD4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: USD2 nis-map: USD4 nis-base: automountmapname=USD4,cn=USD1,cn=automount,USDbasedn nis-filter: (objectclass=*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h USD3 -D \"cn=Directory Manager\" -W -f /tmp/amap.ldif IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.USD4); do IFS=\" \" key=USD(echo \"USDline\" | awk '{print USD1}') info=USD(echo \"USDline\" | sed -e \"s#^USDkey[ \\t]*##\") ipa automountkey-add nis USD4 --key=\"USDkey\" --info=\"USDinfo\" done",
"kinit admin",
"sh /root/nis-automounts.sh location nisdomain nis-master.example.com map_name",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h ipaserver.example.com -x dn: cn=config changetype: modify replace: passwordStorageScheme passwordStorageScheme: crypt",
"ipa user-mod user --password Password: Enter Password again to verify: -------------------- Modified user \"user\" --------------------",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h ldap.example.com -p 389 dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ipa user-unlock user ----------------------- Unlocked account \"user\" -----------------------",
"ipa user-status user ----------------------- Account disabled: False ----------------------- Server: example.com Failed logins: 8 Last successful authentication: 20160229080309Z Last failed authentication: 20160229080317Z Time now: 2016-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------",
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'",
"ipactl restart",
"First Factor: Second Factor (optional):",
"First factor: static_password Second factor: one-time_password",
"First factor: static_password Second factor: one-time_password",
"[Service] Environment=OPENSSL_FIPS_NON_APPROVED_MD5_ALLOW=1",
"systemctl daemon-reload",
"systemctl start radiusd",
"ipa config-mod --user-auth-type=otp",
"ipa config-mod --user-auth-type=otp --user-auth-type=disabled",
"ipa user-mod user --user-auth-type=otp",
"ipa config-mod --user-auth-type=otp --user-auth-type=password",
"ipa otptoken-add ------------------ Added OTP token \"\" ------------------ Unique ID: 7060091b-4e40-47fd-8354-cb32fecd548a Type: TOTP",
"ipa otptoken-add-yubikey --slot=2",
"ipa otptoken-add --owner=user ------------------ Added OTP token \"\" ------------------ Unique ID: 5303baa8-08f9-464e-a74d-3b38de1c041d Type: TOTP",
"ipa otptoken-add-yubikey --owner=user",
"[otp] DEFAULT = { timeout = 120 }",
"systemctl restart krb5kdc",
"ipa user-mod --user-auth-type=password --user-auth-type=otp user_name",
"ipa otptoken-add --desc=\" New Token \"",
"ipa otptoken-find -------------------- 2 OTP tokens matched -------------------- Unique ID: 4ce8ec29-0bf7-4100-ab6d-5d26697f0d8f Type: TOTP Description: New Token Owner: user Unique ID: e1e9e1ef-172c-4fa9-b637-6b017ce79315 Type: TOTP Description: Old Token Owner: user ---------------------------- Number of entries returned 2 ----------------------------",
"# ipa otptoken-del e1e9e1ef-172c-4fa9-b637-6b017ce79315 -------------------------------------------------------- Deleted OTP token \" e1e9e1ef-172c-4fa9-b637-6b017ce79315 \" --------------------------------------------------------",
"ipa user-mod --user-auth-type=otp user_name",
"ipa host-mod server.example.com --auth-ind=otp --------------------------------------------------------- Modified host \"server.example.com\" --------------------------------------------------------- Host name: server.example.com Authentication Indicators: otp",
"pkinit_indicator = indicator",
"systemctl restart krb5kdc",
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMM4xPu54Kf2dx7C4Ta2F7vnIzuL1i6P21TTKniSkjFuA+r qW06588e7v14Im4VejwnNk352gp49A62qSVOzp8IKA9xdtyRmHYCTUvmkcyspZvFRI713zfRKQVFyJOqHmW/m dCmak7QBxYou2ELSPhH3pe8MYTQIulKDSu5Zbsrqedg1VGkSJxf7mDnCSPNWWzAY9AFB9Lmd2m2xZmNgVAQEQ nZXNMaIlroLD/51rmMSkJGHGb1O68kEq9Z client.example.com",
"ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Created directory '/home/user/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:GAUIDVVEgly7rs1lTWP6oguHz8BKvyZkpqCqVSsmi7c [email protected] The key's randomart image is: +--[ RSA 2048]----+ | | | + . | | + = . | | = + | | . E S.. | | . . .o | | . . . oo. | | . o . +.+o | | o .o..o+o | +-----------------+",
"ipa user-mod user --sshpubkey=\" ssh-rsa AAAAB3Nza...SNc5dv== client.example.com \"",
"--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"",
"ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"",
"ipa user-mod user --sshpubkey=",
"ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts",
"AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys AuthorizedKeysCommandUser user",
"certutil -L -d /etc/pki/nssdb/ -h all Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI my_certificate CT,C,C",
"certutil -L -d /etc/pki/nssdb/ -n ' my_certificate ' -r | base64 -w 0 > user.crt",
"ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'",
"openssl x509 -noout -issuer -in idm_user.crt -nameopt RFC2253 issuer=CN=Certificate Authority,O=REALM.EXAMPLE.COM",
"# openssl x509 -noout -issuer -in ad_user.crt -nameopt RFC2253 issuer=CN=AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM",
"ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN= AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM ' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'",
"(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})",
"<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE",
"systemctl restart sssd",
"[root@server ~]# cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]",
"sss_cache -u user_name",
"ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------",
"kinit admin",
"CERT=`cat idm_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa user-add-certmapdata idm_user --certificate USDCERT",
"ipa user-add-certmapdata idm_user --subject \" O=EXAMPLE.ORG,CN=test \" --issuer \" CN=Smart Card CA,O=EXAMPLE.ORG \" -------------------------------------------- Added certificate mappings to user \" idm_user \" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG",
"sss_cache -u user_name",
"ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------",
"(userCertificate;binary={cert!bin})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"ad.example.com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user",
"(userCertificate;binary={cert!bin})",
"<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com",
"systemctl restart sssd",
"kinit admin",
"ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE",
"systemctl restart sssd",
"sss_cache -u [email protected]",
"ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------",
"kinit admin",
"CERT=`cat ad_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa idoverrideuser-add-cert [email protected] --certificate USDCERT",
"sss_cache -u [email protected]",
"ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------",
"ipa certmaprule-add ad_cert_for_ipa_and_ad_users \\ --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' \\ --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' \\ --domain=ad.example.com",
"ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com",
"ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh",
"chmod +x client_smart_card_script.sh",
"./client_smart_card_script.sh CA_cert.pem",
"ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate",
"systemctl restart httpd",
"client login: idm_user PIN for PIV Card Holder pin (PIV_II) for user [email protected]:",
"ssh -I /usr/lib64/opensc-pkcs11.so -l idm_user server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42",
"ssh -I /usr/lib64/opensc-pkcs11.so -l [email protected] server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42",
"id uid=1928200001(idm_user) gid=1928200001(idm_user) groups=1928200001(idm_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"id uid=1171201116([email protected]) gid=1171201116([email protected]) groups=1171201116([email protected]),1171200513(domain [email protected]) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
"kinit admin Password for [email protected]:",
"ipa certmapconfig-mod --promptusername=TRUE Prompt for the username: TRUE",
"ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh",
"chmod +x client_smart_card_script.sh",
"./client_smart_card_script.sh CA_cert.pem",
"ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate",
"systemctl restart httpd",
"kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' idm_user",
"[libdefaults] [... file truncated ...] pkinit_eku_checking = kpServerAuth pkinit_kdc_hostname = adserver.ad.domain.com",
"Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Kdc] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\LSA\\Kerberos\\Parameters] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001",
"kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' [email protected]",
"ipa-advise config-server-for-smart-card-auth > server_smart_card_script.sh",
"chmod +x server_smart_card_script.sh",
"ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate",
"systemctl restart httpd systemctl restart krb5kdc",
"NSSRenegotiation NSSRequireSafeNegotiation on",
"#! /usr/bin/env python def application(environ, start_response): status = '200 OK' response_body = \"\"\" <!DOCTYPE html> <html> <head> <title>Login</title> </head> <body> <form action='/app' method='get'> Username: <input type='text' name='username'> <input type='submit' value='Login with certificate'> </form> </body> </html> \"\"\" response_headers = [ ('Content-Type', 'text/html'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]",
"#! /usr/bin/env python def application(environ, start_response): try: user = environ['REMOTE_USER'] except KeyError: status = '400 Bad Request' response_body = 'Login failed.\\n' else: status = '200 OK' response_body = 'Login succeeded. Username: {}\\n'.format(user) response_headers = [ ('Content-Type', 'text/plain'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]",
"<IfModule !lookup_identity_module> LoadModule lookup_identity_module modules/mod_lookup_identity.so </IfModule> WSGIScriptAlias /login /var/www/app/login.py WSGIScriptAlias /app /var/www/app/protected.py <Location \"/app\"> NSSVerifyClient require NSSUserName SSL_CLIENT_CERT LookupUserByCertificate On LookupUserByCertificateParamName \"username\" </Location>",
"ipa host-mod host_name --auth-ind= indicator",
"ipa service-mod service / host_name --auth-ind= indicator",
"ipa host-mod host.idm.example.com --auth-ind=pkinit",
"mkdir ~/certdb/",
"certutil -N -d ~/certdb/",
"certutil -R -d ~/certdb/ -a -g 4096 -s \" CN=server.example.com,O=EXAMPLE.COM \" -8 server.example.com > certificate_request.csr",
"otherName= 1.3.6.1.4.1.311.20.2.3 ;UTF8: test2/[email protected] DNS.1 = server.example.com",
"openssl req -new -newkey rsa: 2048 -keyout test2service.key -sha256 -nodes -out certificate_request.csr -config openssl.conf",
"ipa cert-request certificate_request.csr --principal= host/server.example.com",
"ipa cert-revoke 1032 --revocation-reason=1",
"ipa cert-remove-hold 1032",
"ipa user-add-cert user --certificate= MIQTPrajQAwg",
"ipa user-add-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"",
"ipa cert-find ----------------------- 10 certificates matched ----------------------- Serial number (hex): 0x1 Serial number: 1 Status: VALID Subject: CN=Certificate Authority,O=EXAMPLE.COM ----------------------------- Number of entries returned 10 -----------------------------",
"ipa cert-find --issuedon-from=2020-01-07 --issuedon-to=2020-02-07",
"ipa cert-show 132 Serial number: 132 Certificate: MIIDtzCCAp+gAwIBAgIBATANBgkqhkiG9w0BAQsFADBBMR8wHQYDVQQKExZMQUIu LxIQjrEFtJmoBGb/TWRlwGEWy1ayr4iTEf1ayZ+RGNylLalEAtk9RLjEjg== Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Sun Jun 08 05:51:11 2014 UTC Not After: Thu Jun 08 05:51:11 2034 UTC Serial number (hex): 0x132 Serial number: 132",
"ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA",
"ipa cert-show certificate_serial_number --out= path_to_file",
"ipa certprofile Manage Certificate Profiles EXAMPLES: Import a profile that will not store issued certificates: ipa certprofile-import ShortLivedUserCert --file UserCert.profile --desc \"User Certificates\" --store=false Delete a certificate profile: ipa certprofile-del ShortLivedUserCert",
"ipa certprofile-mod --help Usage: ipa [global-options] certprofile-mod ID [options] Modify Certificate Profile configuration. Options: -h, --help show this help message and exit --desc=STR Brief description of this profile --store=BOOL Whether to store certs issued using this profile",
"ipa certprofile-import Profile ID: smime Profile description: S/MIME certificates Store issued certificates [True]: TRUE Filename of a raw profile. The XML format is not supported.: smime.cfg ------------------------ Imported profile \"smime\" ------------------------ Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE",
"ipa certprofile-import --file= smime.cfg",
"ipa certprofile-show caIPAserviceCert --out= file_name",
"ipa certprofile-find ------------------ 3 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles",
"ipa certprofile-show profile_ID Profile ID: profile_ID Profile description: S/MIME certificates Store issued certificates: TRUE",
"ipa certprofile-mod profile_ID --desc=\"New description\" --store=False ------------------------------------ Modified Certificate Profile \"profile_ID\" ------------------------------------ Profile ID: profile_ID Profile description: New description Store issued certificates: FALSE",
"ipa certprofile-mod profile_ID --file= new_configuration.cfg",
"ipa certprofile-del profile_ID ----------------------- Deleted profile \"profile_ID\" -----------------------",
"ipa caacl Manage CA ACL rules. EXAMPLES: Create a CA ACL \"test\" that grants all users access to the \"UserCert\" profile: ipa caacl-add test --usercat=all ipa caacl-add-profile test --certprofiles UserCert Display the properties of a named CA ACL: ipa caacl-show test Create a CA ACL to let user \"alice\" use the \"DNP3\" profile on \"DNP3-CA\": ipa caacl-add alice_dnp3 ipa caacl-add-ca alice_dnp3 --cas DNP3-CA ipa caacl-add-profile alice_dnp3 --certprofiles DNP3 ipa caacl-add-user alice_dnp3 --user=alice",
"ipa caacl-mod --help Usage: ipa [global-options] caacl-mod NAME [options] Modify a CA ACL. Options: -h, --help show this help message and exit --desc=STR Description --cacat=['all'] CA category the ACL applies to --profilecat=['all'] Profile category the ACL applies to",
"ipa caacl-add ACL name: smime_acl ------------------------ Added CA ACL \"smime_acl\" ------------------------ ACL name: smime_acl Enabled: TRUE",
"ipa caacl-add ca_acl_name --usercat=all",
"ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' with profile 'profile_id' for certificate issuance.",
"ipa caacl-find ----------------- 2 CA ACLs matched ----------------- ACL name: hosts_services_caIPAserviceCert Enabled: TRUE",
"ipa caacl-show ca_acl_name ACL name: ca_acl_name Enabled: TRUE Host category: all",
"ipa caacl-mod ca_acl_name --desc=\"New description\" --profilecat=all --------------------------- Modified CA ACL \"ca_acl_name\" --------------------------- ACL name: smime_acl Description: New description Enabled: TRUE Profile category: all",
"ipa caacl-disable ca_acl_name --------------------------- Disabled CA ACL \"ca_acl_name\" ---------------------------",
"ipa caacl-enable ca_acl_name --------------------------- Enabled CA ACL \"ca_acl_name\" ---------------------------",
"ipa caacl-del ca_acl_name",
"ipa caacl-add-user ca_acl_name --groups= group_name",
"ipa caacl-add-user ca_acl_name --users= user_name ipa: ERROR: users cannot be added when user category='all'",
"ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' with profile 'profile_id' for certificate issuance.",
"ipa caacl-add-user --help",
"ipa certprofile-import certificate_profile --file= certificate_profile.cfg --store=True",
"ipa caacl-add users_certificate_profile --usercat=all",
"ipa caacl-add-profile users_certificate_profile --certprofiles= certificate_profile",
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= user '",
"ipa cert-request cert.csr --principal= user --profile-id= certificate_profile",
"ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA",
"ipa certprofile-import certificate_profile --file= certificate_profile.txt --store=True",
"ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user",
"ipa-kra-install",
"ipa help vault",
"ipa vault-add --help",
"ipa vault-show user_vault --user user",
"[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found",
"kinit user",
"ipa vault-add my_vault --type standard ---------------------- Added vault \"my_vault\" ---------------------- Vault name: my_vault Type: standard Owner users: user Vault user: user",
"ipa vault-archive my_vault --in secret.txt ----------------------------------- Archived data into vault \"my_vault\" -----------------------------------",
"kinit user",
"ipa vault-retrieve my_vault --out secret_exported.txt -------------------------------------- Retrieved data from vault \"my_vault\" --------------------------------------",
"kinit admin",
"ipa vault-add http_password --type standard --------------------------- Added vault \"http_password\" --------------------------- Vault name: http_password Type: standard Owner users: admin Vault user: admin",
"ipa vault-archive http_password --in password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"kinit admin",
"openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)",
"openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key",
"ipa vault-add password_vault --service HTTP/server.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"password_vault\" ---------------------------- Vault name: password_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------",
"kinit admin",
"kinit HTTP/server.example.com -k -t /etc/httpd/conf/ipa.keytab",
"ipa vault-retrieve password_vault --service HTTP/server.example.com --private-key-file service-private.pem --out password.txt ------------------------------------ Retrieved data from vault \"password_vault\" ------------------------------------",
"ipa vault-archive http_password --in new_password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------",
"kinit admin",
"ipa vault-add shared_vault --shared --type standard --------------------------- Added vault \"shared_vault\" --------------------------- Vault name: shared_vault Type: standard Owner users: admin Shared vault: True",
"ipa vault-archive shared_vault --shared --in secret.txt ----------------------------------- Archived data into vault \"shared_vault\" -----------------------------------",
"ipa vault-add-member shared_vault --shared --users={user1,user2} Vault name: shared_vault Type: standard Owner users: admin Shared vault: True Member users: user1, user2 ------------------------- Number of members added 2 -------------------------",
"kinit user1",
"ipa vault-retrieve shared_vault --shared --out secret_exported.txt ----------------------------------------- Retrieved data from vault \"shared_vault\" -----------------------------------------",
"ipa vault-mod --change-password Vault name: example_symmetric_vault Password: old_password New password: new_password Enter New password again to verify: new_password ----------------------- Modified vault \" example_symmetric_vault \" ----------------------- Vault name: example_symmetric_vault Type: symmetric Salt: dT+M+4ik/ltgnpstmCG1sw== Owner users: admin Vault user: admin",
"ipa vault-mod example_asymmetric_vault --private-key-file= old_private_key.pem --public-key-file= new_public_key.pem ------------------------------- Modified vault \" example_assymmetric_vault \" ------------------------------- Vault name: example_assymmetric_vault Typ: asymmetric Public key: Owner users: admin Vault user: admin",
"ipa ca-add vpn-ca --subject=\" CN=VPN,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"vpn-ca\" ------------------- Name: vpn-ca Authority ID: ba83f324-5e50-4114-b109-acca05d6f1dc Subject DN: CN=VPN,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM",
"certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u",
"ipa ca-del vpn-ca ------------------- Deleted CA \"vpn-ca\" -------------------",
"ipa-certupdate trying https://idmserver.idm.example.com/ipa/json Forwarding 'schema' to json server 'https://idmserver.idm.example.com/ipa/json' trying https://idmserver.idm.example.com/ipa/json Forwarding 'ca_is_enabled' to json server 'https://idmserver.idm.example.com/ipa/json' Forwarding 'ca_find/1' to json server 'https://idmserver.idm.example.com/ipa/json' Systemwide CA database updated. Systemwide CA database updated. The ipa-certupdate command was successful",
"Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" renew success",
"certmonger: Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" will not be valid after 20160204065136.",
"certutil -L -d /etc/pki/pki-tomcat/alias",
"ipa-cacert-manage renew --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem",
"certutil -L -d /etc/pki/pki-tomcat/alias/",
"ipa-cert-fix The following certificates will be renewed: Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 13 Expires: 2019-05-12 05:55:47 Enter \"yes\" to proceed:",
"Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 268369925 Expires: 2021-08-14 02:19:33 Becoming renewal master. The ipa-cert-fix command was successful",
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa: INFO: The ipactl command was successful",
"ipactl restart --force",
"getcert list | egrep '^Request|status:|subject:' Request ID '20190522120745': status: MONITORING subject: CN=IPA RA,O=EXAMPLE.COM 201905222205 Request ID '20190522120834': status: MONITORING subject: CN=Certificate Authority,O=EXAMPLE.COM 201905222205",
"Request ID '20190522120835': status: CA_UNREACHABLE subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205",
"ipa-cert-fix Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM Serial: 3 Expires: 2019-05-11 12:07:11 Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205 Serial: 15 Expires: 2019-08-14 04:25:05 The ipa-cert-fix command was successful",
"ipa-cacert-manage install /etc/group/cert.pem",
"NSSEnforceValidCerts off",
"systemctl restart httpd.service",
"ldapsearch -h server.example.com -p 389 -D \"cn=directory manager\" -w secret -LLL -b cn=config -s base \"(objectclass=*)\" nsslapd-validate-cert dn: cn=config nsslapd-validate-cert: warn",
"ldapmodify -D \"cn=directory manager\" -w secret -p 389 -h server.example.com dn: cn=config changetype: modify replace: nsslapd-validate-cert nsslapd-validate-cert: warn",
"systemctl restart dirsrv.target",
"ipa-server-certinstall --http --dirsrv ssl.key ssl.crt",
"systemctl restart httpd.service",
"systemctl restart dirsrv@ REALM .service",
"certutil -L -d /etc/httpd/alias",
"certutil -L -d /etc/dirsrv/slapd- REALM /",
"systemctl stop [email protected]",
"ca.crl.MasterCRL.autoUpdateInterval=60",
"systemctl start [email protected]",
"[root@ipa-server ~] ipa-ca-install",
"[root@ipa-server ~] ipa-ca-install --external-ca",
"ipa-ca-install --external-cert-file=/root/ master .crt --external-cert-file=/root/ca.crt",
"openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN= idmserver.idm.example.com ,O= IDM.EXAMPLE.COM '",
"ipa-server-certinstall -w --pin= password new.key new.crt",
"ipa-server-certinstall -d --pin= password new.key new.cert",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage status PKINIT is enabled The ipa-pkinit-manage command was successful",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]",
"kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com ---------------- 1 server matched ---------------- Server name: server.idm.example.com PKINIT status: enabled ---------------------------- Number of entries returned 1 ----------------------------",
"ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa-cacert-manage install -t CT,C,C ca.pem",
"ipa-certupdate",
"ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful",
"ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful",
"ipa pwpolicy-mod --minclasses= 1",
"ipa pwpolicy-add Group: group_name Priority: priority_level",
"ipa pwpolicy-find",
"ipa pwpolicy-mod --minlength=10",
"ipa pwpolicy-mod group_name --minlength=10",
"ipa pwpolicy-show",
"ipa pwpolicy-show group_name",
"ipa user-mod user_name --password-expiration='2016-02-03 20:37:34Z'",
"ldapmodify -D \"cn=Directory Manager\" -w secret -h server.example.com -p 389 -vv dn: uid= user_name ,cn=users,cn=accounts,dc= example ,dc= com changetype: modify replace: krbPasswordExpiration krbPasswordExpiration: 20160203203734Z",
"kinit user_name -l 90000",
"ipa krbtpolicy-mod --maxlife= 80000 Max life: 80000 Max renew: 604800",
"ipa krbtpolicy-mod admin --maxlife= 160000 Max life: 80000 Max renew: 604800",
"ldapsearch -x -b \"cn=computers,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname",
"ldapsearch -x -b \"cn=services,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname",
"ipa-getkeytab -p host/ [email protected] -s server.example.com -k /etc/krb5.keytab",
"ipa-getkeytab -p HTTP/ [email protected] -s server.example.com -k /etc/httpd/conf/ipa.keytab",
"klist -kt /etc/krb5.keytab Keytab: WRFILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ----------------- -------------------------------------------------------- 1 06/09/16 05:58:47 host/[email protected](aes256-cts-hmac-sha1-96) 2 06/09/16 11:23:01 host/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 krbtgt/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 HTTP/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 ldap/[email protected](aes256-cts-hmac-sha1-96)",
"chown apache /etc/httpd/conf/ipa.keytab",
"chmod 0600 /etc/httpd/conf/ipa.keytab",
"ipa-rmkeytab --realm EXAMPLE.COM --keytab /etc/krb5.keytab",
"ipa-rmkeytab --principal ldap/client.example.com --keytab /etc/krb5.keytab",
"ipa sudorule-add-option sudo_rule_name Sudo Option: first_option ipa sudorule-add-option sudo_rule_name Sudo Option: second_option",
"ipa sudorule-add-option sudo_rule_name Sudo Option: env_keep=\"COLORS DISPLAY EDITOR HOSTNAME HISTSIZE INPUTRC KDEDIR LESSSECURE LS_COLORS MAIL PATH PS1 PS2 XAUTHORITY\"",
"sudoers: files sss",
"vim /etc/nsswitch.conf sudoers: files sss",
"vim /etc/sssd/sssd.conf [sssd] config_file_version = 2 services = nss, pam, sudo domains = IPADOMAIN",
"systemctl enable rhel-domainname.service",
"nisdomainname example.com",
"echo \"NISDOMAIN= example.com \" >> /etc/sysconfig/network",
"systemctl restart rhel-domainname.service",
"[domain/ IPADOMAIN ] debug_level = 6 .",
"ipa sudocmd-add /usr/bin/less --desc=\"For reading log files\" ---------------------------------- Added sudo command \"/usr/bin/less\" ---------------------------------- sudo Command: /usr/bin/less Description: For reading log files",
"ipa sudocmdgroup-add files --desc=\"File editing commands\" ----------------------------------- Added sudo command group \"files\" ----------------------------------- sudo Command Group: files Description: File editing commands",
"ipa sudocmdgroup-add-member files --sudocmds \"/usr/bin/vim\" sudo Command Group: files Description: File editing commands Member sudo commands: /usr/bin/vim ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add files-commands -------------------------------- Added Sudo Rule \"files-commands\" -------------------------------- Rule name: files-commands Enabled: TRUE",
"ipa sudocmd-mod /usr/bin/less --desc=\"For reading log files\" ------------------------------------- Modified Sudo Command \"/usr/bin/less\" ------------------------------------- Sudo Command: /usr/bin/less Description: For reading log files Sudo Command Groups: files",
"ipa sudorule-mod sudo_rule_name --desc=\" sudo_rule_description \"",
"ipa sudorule-mod sudo_rule_name --order= 3",
"ipa sudorule-mod sudo_rule --usercat=all",
"ipa sudorule-add-option files-commands Sudo Option: !authenticate --------------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"files-commands\" ---------------------------------------------------------",
"ipa sudorule-remove-option files-commands Sudo Option: authenticate ------------------------------------------------------------- Removed option \"authenticate\" from Sudo Rule \"files-commands\" -------------------------------------------------------------",
"ipa sudorule-add-user files-commands --users=user --groups=user_group ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-user files-commands [member user]: user [member group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-host files-commands --hosts=example.com --hostgroups=host_group ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-host files-commands [member host]: example.com [member host group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-allow-command files-commands --sudocmds=/usr/bin/less --sudocmdgroups=files ------------------------- Number of members added 2 -------------------------",
"ipa sudorule-remove-allow-command files-commands [member sudo command]: /usr/bin/less [member sudo command group]: --------------------------- Number of members removed 1 ---------------------------",
"ipa sudorule-add-runasuser files-commands --users=user RunAs Users: user",
"kinit admin Password for [email protected]:",
"ipa sudorule-add new_sudo_rule --desc=\"Rule for user_group\" --------------------------------- Added Sudo Rule \"new_sudo_rule\" --------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE",
"ipa sudorule-add-user new_sudo_rule --groups=user_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host new_sudo_rule --hostgroups=host_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group Host Groups: host_group ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod new_sudo_rule --cmdcat=all ------------------------------ Modified Sudo Rule \"new_sudo_rule\" ------------------------------ Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group",
"ipa sudorule-add-option new_sudo_rule Sudo Option: !authenticate ----------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"new_sudo_rule\" ----------------------------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate",
"ipa sudorule-show new_sudo_rule Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate",
"ipa sudocmd-show /usr/bin/less Sudo Command: /usr/bin/less Description: For reading log files. Sudo Command Groups: files",
"ipa sudorule-disable sudo_rule_name ----------------------------------- Disabled Sudo Rule \"sudo_rule_name\" -----------------------------------",
"ipa sudorule-enable sudo_rule_name ----------------------------------- Enabled Sudo Rule \"sudo_rule_name\" -----------------------------------",
"ipa hbacrule-add Rule name: rule_name --------------------------- Added HBAC rule \"rule_name\" --------------------------- Rule name: rule_name Enabled: TRUE",
"ipa hbacrule-add-user Rule name: rule_name [member user]: [member group]: group_name Rule name: rule_name Enabled: TRUE User Groups: group_name ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-user rule_name --users= user1 --users= user2 --users= user3 Rule name: rule_name Enabled: TRUE Users: user1, user2, user3 ------------------------- Number of members added 3 -------------------------",
"ipa hbacrule-mod rule_name --usercat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name User category: all Enabled: TRUE",
"ipa hbacrule-add-host Rule name: rule_name [member host]: host.example.com [member host group]: Rule name: rule_name Enabled: TRUE Hosts: host.example.com ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-host rule_name --hosts= host1 --hosts= host2 --hosts= host3 Rule name: rule_name Enabled: TRUE Hosts: host1, host2, host3 ------------------------- Number of members added 3 -------------------------",
"ipa hbacrule-mod rule_name --hostcat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Host category: all Enabled: TRUE",
"ipa hbacrule-add-service Rule name: rule_name [member HBAC service]: ftp [member HBAC service group]: Rule name: rule_name Enabled: TRUE Services: ftp ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-service rule_name --hbacsvcs= su --hbacsvcs= sudo Rule name: rule_name Enabled: TRUE Services: su, sudo ------------------------- Number of members added 2 -------------------------",
"ipa hbacrule-mod rule_name --servicecat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Service category: all Enabled: TRUE",
"ipa hbactest User name: user1 Target host: example.com Service: sudo --------------------- Access granted: False --------------------- Not matched rules: rule1",
"ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --------------------- Access granted: False --------------------- Not matched rules: rule1",
"ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --rules= rule2 -------------------- Access granted: True -------------------- Matched rules: rule2 Not matched rules: rule1",
"ipa hbacrule-disable allow_all ------------------------------ Disabled HBAC rule \"allow_all\" ------------------------------",
"ipa hbacsvc-add tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp",
"ipa hbacsvcgroup-add Service group name: login -------------------------------- Added HBAC service group \"login\" -------------------------------- Service group name: login",
"ipa hbacsvcgroup-add-member Service group name: login [member HBAC service]: sshd Service group name: login Member HBAC service: sshd ------------------------- Number of members added 1 -------------------------",
"semanage user -l Labelling MLS/ MLS/ SELinux User Prefix MCS Level MCS Range SELinux Roles guest_u user s0 s0 guest_r root user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r staff_u user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r sysadm_u user s0 s0-s0:c0.c1023 sysadm_r system_u user s0 s0-s0:c0.c1023 system_r unconfined_r unconfined_u user s0 s0-s0:c0.c1023 system_r unconfined_r user_u user s0 s0 user_r xguest_u user s0 s0 xguest_r",
"SELinux_user:MLS[:MCS]",
"[user1]@server ~]USD ipa config-show SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023",
"[user1@server ~]USD ipa config-mod --ipaselinuxusermaporder=\"unconfined_u:s0-s0:c0.c1023USDguest_u:s0USDxguest_u:s0USDuser_u:s0-s0:c0.c1023USDstaff_u:s0-s0:c0.c1023\"",
"[user1@server ~]USD ipa config-mod --ipaselinuxusermapdefault=\"guest_u:s0\"",
"[user1@server ~]USD ipa selinuxusermap-add --selinuxuser=\"xguest_u:s0\" selinux1 [user1@server ~]USD ipa selinuxusermap-add-user --users=user1 --users=user2 --users=user3 selinux1 [user1@server ~]USD ipa selinuxusermap-add-host --hosts=server.example.com --hosts=test.example.com selinux1",
"[user1@server ~]USD ipa selinuxusermap-add --hbacrule=webserver --selinuxuser=\"xguest_u:s0\" selinux1",
"[user1@server ~]USD ipa selinuxusermap-add-user --users=user1 selinux1",
"[user1@server ~]USD ipa selinuxusermap-remove-user --users=user2 selinux1",
"dn: idnsname=client1,idnsname=example.com.,cn=dns,dc=idm,dc=example,dc=com objectclass: top objectclass: idnsrecord idnsname: client1 Arecord: 192.0.2.1 Arecord: 192.0.2.2 Arecord: 192.0.2.3 AAAArecord: 2001:DB8::ABCD",
"ipa dnszone-add newserver.example.com",
"ipa dnszone-del server.example.com",
"[user@server ~]USD ipa dnszone-mod --allow-transfer=\"192.0.2.1;198.51.100.1;203.0.113.1\" example.com",
"dig @ipa-server zone_name AXFR",
"host -t MX mail.example.com. mail.example.com mail is handled by 10 server.example.com. host -t MX demo.example.com. demo.example.com. has no MX record. host -t A mail.example.com. mail.example.com has no A record host -t A demo.example.com. random.example.com has address 192.168.1.1",
"ipa dnsrecord-add zone_name record_name -- record_type_option=data",
"ipa dnsrecord-add example.com www --a-rec 192.0.2.123",
"ipa dnsrecord-add example.com \"*\" --a-rec 192.0.2.123",
"ipa dnsrecord-mod example.com www --a-rec 192.0.2.123 --a-ip-address 192.0.2.1",
"ipa dnsrecord-add example.com www --aaaa-rec 2001:db8::1231:5675",
"ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"0 51 389 server1.example.com.\" ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"1 49 389 server2.example.com.\"",
"ipa dnsrecord-add reverseNetworkIpAddress hostIpAddress --ptr-rec FQDN",
"ipa dnsrecord-add 2.0.192.in-addr.arpa 4 --ptr-rec server4.example.com.",
"ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.example.com.",
"ipa dnsrecord-del example.com www --a-rec 192.0.2.1",
"[user@server ~]USD ipa dnszone-disable zone.example.com ----------------------------------------- Disabled DNS zone \"example.com\" -----------------------------------------",
"[user@server ~]USD ipa dnszone-mod server.example.com --dynamic-update=TRUE",
"ipa-client-install --enable-dns-updates",
"vim /etc/sssd/sssd.conf",
"[domain/ipa.example.com]",
"dyndns_update = true",
"dyndns_ttl = 2400",
"ipa dnszone-mod idm.example.com. --dynamic-update=TRUE",
"ipa dnszone-mod idm.example.com. --update-policy='grant IDM.EXAMPLE.COM krb5-self * A; grant IDM.EXAMPLE.COM krb5-self * AAAA; grant IDM.EXAMPLE.COM krb5-self * SSHFP;'",
"ipa dnszone-mod idm.example.com. --allow-sync-ptr=True",
"ipa dnszone-mod 2.0.192.in-addr.arpa. --dynamic-update=TRUE",
"ipa dnsconfig-mod --allow-sync-ptr=true",
"dyndb \"ipa\" \"/usr/lib64/bind/ldap.so\" { sync_ptr yes; };",
"ipactl restart",
"ipa dnszone-mod zone.example.com --update-policy \"grant EXAMPLE.COM krb5-self * A; grant EXAMPLE.COM krb5-self * AAAA; grant EXAMPLE.COM krb5-self * SSHFP;\"",
"ipa dnsrecord-add idm.example.com. sub_zone1 --ns-rec= 192.0.2.1",
"ipa dnsforwardzone-add sub_zone1 .idm.example.com. --forwarder 192.0.2.1",
"[user@server ~]USD ipa dnsconfig-mod --forwarder=192.0.2.254 Global forwarders: 192.0.2.254",
"ipa dnsforwardzone-add --help",
"[user@server ~]USD ipa dnsforwardzone-add zone.test. --forwarder=172.16.0.1 --forwarder=172.16.0.2 --forward-policy=first Zone name: zone.test. Zone forwarders: 172.16.0.1, 172.16.0.2 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forwarder=172.16.0.3 Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forward-policy=only Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: only",
"[user@server ~]USD ipa dnsforwardzone-show zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.5 Forward policy: first",
"[user@server ~]USD ipa dnsforwardzone-find zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first ---------------------------- Number of entries returned 1 ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-del zone.test. ---------------------------- Deleted forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-enable zone.test. ---------------------------- Enabled forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-disable zone.test. ---------------------------- Disabled forward DNS zone \"zone.test.\" ----------------------------",
"[user@server ~]USD ipa dnsforwardzone-add-permission zone.test. --------------------------------------------------------- Added system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test.",
"[user@server ~]USD ipa dnsforwardzone-remove-permission zone.test. --------------------------------------------------------- Removed system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test.",
"[user@server]USD ipa dnszone-add 2.0.192.in-addr.arpa.",
"[user@server ~]USD ipa dnszone-add --name-from-ip= 192.0.2.0/24",
"[user@server ~]USD ipa dnszone-mod --allow-query=192.0.2.0/24;2001:DB8::/32;203.0.113.1 example.com",
"dig -t SRV +short _kerberos._tcp.idm.example.com 0 100 88 idmserver-01.idm.example.com. 0 100 88 idmserver-02.idm.example.com.",
"dig -t SRV +short _kerberos._tcp.idm.example.com _kerberos._tcp.germany._locations.idm.example.com. 0 100 88 idmserver-01.idm.example.com. 50 100 88 idmserver-02.idm.example.com.",
"ipa location-add germany ---------------------------- Added IPA location \"germany\" ---------------------------- Location name: germany",
"systemctl restart named-pkcs11",
"ipa location-find ----------------------- 2 IPA locations matched ----------------------- Location name: australia Location name: germany ----------------------------- Number of entries returned: 2 -----------------------------",
"ipa server-mod idmserver-01.idm.example.com --location=germany ipa: WARNING: Service named-pkcs11.service requires restart on IPA server idmserver-01.idm.example.com to apply configuration changes. -------------------------------------------------- Modified IPA server \"idmserver-01.idm.example.com\" -------------------------------------------------- Servername: idmserver-01.idm.example.com Min domain level: 0 Max domain level: 1 Location: germany Enabled server roles: DNS server, NTP server",
"systemctl restart named-pkcs11",
"nameserver 10.10.0.1 nameserver 10.10.0.2",
"nameserver 10.50.0.1 nameserver 10.50.0.3",
"nameserver 10.30.0.1",
"nameserver 10.30.0.1",
"ipa dns-update-system-records --dry-run IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]",
"ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]",
"cat dns_records_file.nsupdate zone example.com. server 192.0.2.1 ; IPA DNS records update delete _kerberos-master._tcp.example.com. SRV update add _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]",
"nsupdate -k tsig_key.file dns_records_file.nsupdate",
"nsupdate -y algorithm:keyname:secret dns_records_file.nsupdate",
"kinit principal_allowed_to_update_records @ REALM nsupdate -g dns_records_file.nsupdate",
"search example.com ; the IdM server nameserver 192.0.2.1 ; backup DNS servers nameserver 198.51.100.1 nameserver 198.51.100.2",
"dn: automountmapname=auto.master,cn=default,cn=automount,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master",
"ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/nsswitch.conf Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs",
"ipa-client-automount --server=ipaserver.example.com --location=boston",
"autofs_provider = ipa ipa_automount_location = default",
"automount: sss files",
"ipa-client-automount --no-sssd",
"# Other common LDAP naming # MAP_OBJECT_CLASS=\"automountMap\" ENTRY_OBJECT_CLASS=\"automount\" MAP_ATTRIBUTE=\"automountMapName\" ENTRY_ATTRIBUTE=\"automountKey\" VALUE_ATTRIBUTE=\"automountInformation\"",
"LDAP_URI=\"ldap:///dc=example,dc=com\"",
"LDAP_URI=\"ldap://ipa.example.com\" SEARCH_BASE=\"cn= location ,cn=automount,dc=example,dc=com\"",
"<autofs_ldap_sasl_conf usetls=\"no\" tlsrequired=\"no\" authrequired=\"yes\" authtype=\"GSSAPI\" clientprinc=\"host/[email protected]\" />",
"vim /etc/sssd/sssd.conf",
"[sssd] services = nss,pam, autofs",
"[nss] [pam] [sudo] [autofs] [ssh] [pac]",
"[domain/EXAMPLE] ldap_search_base = \"dc=example,dc=com\" ldap_autofs_search_base = \"ou=automount,dc=example,dc=com\"",
"systemctl restart sssd.service",
"automount: sss files",
"systemctl restart autofs.service",
"ls /home/ userName",
"automount -f -d",
"NFS_CLIENT_VERSMAX=3",
"ldapclient -v manual -a authenticationMethod=none -a defaultSearchBase=dc=example,dc=com -a defaultServerList=ipa.example.com -a serviceSearchDescriptor=passwd:cn=users,cn=accounts,dc=example,dc=com -a serviceSearchDescriptor=group:cn=groups,cn=compat,dc=example,dc=com -a serviceSearchDescriptor=auto_master:automountMapName=auto.master,cn= location ,cn=automount,dc=example,dc=com?one -a serviceSearchDescriptor=auto_home:automountMapName=auto_home,cn= location ,cn=automount,dc=example,dc=com?one -a objectClassMap=shadow:shadowAccount=posixAccount -a searchTimelimit=15 -a bindTimeLimit=5",
"svcadm enable svc:/system/filesystem/autofs",
"ldapclient -l auto_master dn: automountkey=/home,automountmapname=auto.master,cn= location ,cn=automount,dc=example,dc=com objectClass: automount objectClass: top automountKey: /home automountInformation: auto.home",
"ls /home/ userName",
"ldapmodify -x -D \"cn=directory manager\" -w password -h ipaserver.example.com -p 389 dn: cn= REALM_NAME ,cn=kerberos,dc=example,dc=com changetype: modify add: krbSupportedEncSaltTypes krbSupportedEncSaltTypes: des-cbc-crc:normal - add: krbSupportedEncSaltTypes krbSupportedEncSaltTypes: des-cbc-crc:special - add: krbDefaultEncSaltTypes krbDefaultEncSaltTypes: des-cbc-crc:special",
"allow_weak_crypto = true",
"kinit admin",
"ipa service-add nfs/ nfs-server.example.com",
"ipa-getkeytab -s ipaserver.example.com -p nfs/ nfs-server.example.com -k /etc/krb5.keytab",
"ipa service-show nfs/nfs-server.example.com Principal name: nfs/[email protected] Principal alias: nfs/[email protected] Keytab: True Managed by: nfs-server.example.com",
"yum install nfs-utils",
"[root@nfs-server ~] ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs",
"systemctl enable nfs-idmapd",
"/export *( rw ,sec=krb5:krb5i:krb5p) /home *( rw ,sec=krb5:krb5i:krb5p)",
"exportfs -rav",
"allow_weak_crypto = true",
"yum install nfs-utils",
"kinit admin",
"[root@nfs-client ~] ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs",
"systemctl enable rpc-gssd.service systemctl enable rpcbind.service",
"nfs-server.example.com:/export /mnt nfs4 sec=krb5p,rw nfs-server.example.com:/home /home nfs4 sec=krb5p,rw",
"mkdir -p /mnt/ mkdir -p /home",
"mount /mnt/ mount /home",
"[domain/EXAMPLE.COM] krb5_renewable_lifetime = 50d krb5_renew_interval = 3600",
"systemctl restart sssd",
"ipa automountlocation-add location",
"ipa automountlocation-add raleigh ---------------------------------- Added automount location \"raleigh\" ---------------------------------- Location: raleigh",
"ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct --------------------------- /etc/auto.direct:",
"--------------------------- /etc/auto.direct: /shared/man server.example.com:/shared/man",
"ipa automountkey-add raleigh auto.direct --key=/share --info=\"ro,soft,ipaserver.example.com:/home/share\" Key: /share Mount information: ro,soft,ipaserver.example.com:/home/share",
"ldapclient -a serviceSearchDescriptor=auto_direct:automountMapName=auto.direct,cn= location ,cn=automount,dc=example,dc=com?one",
"--------------------------- /etc/auto.share: man ipa.example.com:/docs/man ---------------------------",
"ipa automountmap-add-indirect location mapName --mount= directory [--parentmap= mapName ]",
"ipa automountmap-add-indirect raleigh auto.share --mount=/share -------------------------------- Added automount map \"auto.share\" --------------------------------",
"ipa automountkey-add raleigh auto.share --key=docs --info=\"ipa.example.com:/export/docs\" ------------------------- Added automount key \"docs\" ------------------------- Key: docs Mount information: ipa.example.com:/export/docs",
"ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct /share /etc/auto.share --------------------------- /etc/auto.direct: --------------------------- /etc/auto.share: man ipa.example.com:/export/docs",
"ldapclient -a serviceSearchDescriptor=auto_share:automountMapName=auto.share,cn= location ,cn=automount,dc=example,dc=com?one",
"ipa automountlocation-import location map_file [--continuous]",
"ipa automountlocation-import raleigh /etc/custom.map",
"NSSProtocol TLSv1.2 NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha",
"sed -i 's/^NSSProtocol .*/NSSProtocol TLSv1.2/' /etc/httpd/conf.d/nss.conf sed -i 's/^NSSCipherSuite .*/NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha/' /etc/httpd/conf.d/nss.conf",
"systemctl restart httpd",
"ldapmodify -h localhost -p 389 -D 'cn=directory manager' -W << EOF dn: cn=encryption,cn=config changeType: modify replace: sslVersionMin sslVersionMin: TLS1.2 EOF",
"systemctl restart dirsrv@ EXAMPLE-COM .service",
"systemctl stop dirsrv@ EXAMPLE-COM .service",
"sslVersionMin: TLS1.2",
"systemctl start dirsrv@ EXAMPLE-COM .service",
"sslVersionRangeStream=\"tls1_2:tls1_2\" sslVersionRangeDatagram=\"tls1_2:tls1_2\"",
"sed -i 's/tls1_[01]:tls1_2/tls1_2:tls1_2/g' /etc/pki/pki-tomcat/server.xml",
"systemctl restart [email protected]",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 -ZZ Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse modifying entry \"cn=config\"",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w secret -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" nsslapd-dbcachesize nsslapd-db-locks nsslapd-dbcachesize: 10000000 nsslapd-db-locks: 50000",
"ldapsearch -D \"cn=directory manager\" -w secret -b \"cn=userRoot,cn=ldbm database,cn=plugins,cn=config\" nsslapd-cachememsize nsslapd-dncachememsize nsslapd-cachememsize: 10485760 nsslapd-dncachememsize: 10485760",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: db_cache_size_in_bytes",
"ldapmodify -D \"cn=directory manager\" -w secret -x dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: 200000000",
"modifying entry \"cn=config,cn=ldbm database,cn=plugins,cn=config\"",
"dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-cachememsize nsslapd-cachememsize: entry_cache_size_in_bytes",
"grep '^dn: ' ldif_file | sed 's/^dn: //' | wc -l 92200",
"grep '^dn: ' ldif_file | sed 's/^dn: //' | wc -c 9802460",
"dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify Replace: nsslapd-dncachememsize Nsslapd-dncachememsize: dn_cache_size",
"dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off",
"dn: cn=Schema Compatibility,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off",
"dn: cn=Content Synchronization,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off",
"dn: cn=Retro Changelog Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off",
"ipactl stop",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-locks: db_lock_number",
"systemctl start dirsrv.target",
"ldapadd -D \" binddn \" -y password_file -f ldif_file",
"dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on",
"systemctl restart dirsrv.target",
"fixup-memberof.pl -D \"cn=directory manager\" -j password_file -Z server_id -b \" suffix \" -f \"(objectClass=*)\" -P LDAP",
"dn: cn=Schema Compatibility,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on",
"dn: cn=Content Synchronization,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on",
"dn: cn=Retro Changelog Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: backup_db_cache_size dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify Replace: nsslapd-dncachememsize Nsslapd-dncachememsize: backup_dn_cache_size - replace: nsslapd-cachememsize nsslapd-cachememsize: backup_entry_cache_size",
"systemctl stop dirsrv.target",
"dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-locks: backup_db_lock_number",
"ipactl start",
"https://ipaserver.example.com/ipa/migration",
"[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:",
"ldapmodify -x -D 'cn=directory manager' -w password -h ipaserver.example.com -p 389 dn: cn=config changetype: modify replace: nsslapd-sasl-max-buffer-size nsslapd-sasl-max-buffer-size: 4194304 modifying entry \"cn=config\"",
"ulimit -u 4096",
"ipa migrate-ds ldap://ldap.example.com:389",
"ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389",
"ipa migrate-ds --group-objectclass=groupOfNames --group-objectclass=groupOfUniqueNames ldap://ldap.example.com:389",
"ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=jsmith --exclude-users=bjensen ldap://ldap.example.com:389",
"ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith --exclude-users=bjensen --exclude-users=mreynolds ldap://ldap.example.com:389",
"ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389",
"ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389",
"ipa user-add TEST_USER",
"ipa user-show --all TEST_USER",
"ipa-compat-manage disable",
"systemctl restart dirsrv.target",
"ipa config-mod --enable-migration=TRUE",
"ipa migrate-ds ldap://ldap.example.com:389",
"ipa-compat-manage enable",
"systemctl restart dirsrv.target",
"ipa config-mod --enable-migration=FALSE",
"ipa-client-install --enable-dns-update",
"https:// ipaserver.example.com /ipa/migration",
"[user@server ~]USD ldapsearch -LL -x -D 'cn=Directory Manager' -w secret -b 'cn=users,cn=accounts,dc=example,dc=com' '(&(!(krbprincipalkey=*))(userpassword=*))' uid",
"ipa migrate-ds --ca-cert-file= /etc/ipa/remote.crt ldaps:// ldap.example.com :636",
"KRB5_TRACE=/dev/stdout ipa cert-find",
"systemctl restart httpd.service",
"KRB5_TRACE=/dev/stdout kinit admin",
"host client_fully_qualified_domain_name",
"host server_fully_qualified_domain_name",
"host server_IP_address",
"server.example.com.",
"systemctl status krb5kdc # systemctl status dirsrv.target",
"ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING named Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING ntpd Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa-dnskeysyncd Service: RUNNING ipa: INFO: The ipactl command was successful",
"dig -t TXT _kerberos. ipa.example.com USD dig -t SRV _kerberos._udp. ipa.example.com USD dig -t SRV _kerberos._tcp. ipa.example.com",
"; <<>> DiG 9.11.0-P2-RedHat-9.11.0-6.P2.fc25 <<>> -t SRV _kerberos._tcp.ipa.server.example ;; global options: +cmd ;; connection timed out; no servers could be reached",
"systemctl status httpd.service # systemctl status dirsrv@ IPA-EXAMPLE-COM .service",
"systemctl restart httpd",
"klist -kt /etc/dirsrv/ds.keytab Keytab name: FILE:/etc/dirsrv/ds.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 01/10/2017 14:54:39 ldap/[email protected] 2 01/10/2017 14:54:39 ldap/[email protected] [... output truncated ...]",
"kinit admin USD kvno ldap/ [email protected]",
"getcert list Number of certificates and requests being tracked: 8. [... output truncated ...] Request ID '20170421124617': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-IPA-EXAMPLE-COM/pwdfile.txt' certificate: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 12:46:17 UTC [... output truncated ...] Request ID '20170421130535': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 13:05:35 UTC [... output truncated ...]",
"dig _ldap._tcp.ipa.example.com. SRV ; <<>> DiG 9.9.4-RedHat-9.9.4-48.el7 <<>> _ldap._tcp.ipa.example.com. SRV ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17851 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 5 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;_ldap._tcp.ipa.example.com. IN SRV ;; ANSWER SECTION: _ldap._tcp.ipa.example.com. 86400 IN SRV 0 100 389 ipaserver.ipa.example.com. ;; AUTHORITY SECTION: ipa.example.com. 86400 IN NS ipaserver.ipa.example.com. ;; ADDITIONAL SECTION: ipaserver.ipa.example.com. 86400 IN A 192.0.21 ipaserver.ipa.example.com 86400 IN AAAA 2001:db8::1",
"host server.ipa.example.com server.ipa.example.com. 86400 IN A 192.0.21 server.ipa.example.com 86400 IN AAAA 2001:db8::1",
"ipa dnszone-show zone_name USD ipa dnsrecord-show zone_name record_name_in_the_zone",
"systemctl restart named-pkcs11",
"ipa dns-update-system-records --dry-run",
"dig +short server2.example.com A dig +short server2.example.com AAAA dig +short -x server2_IPv4_or_IPv6_address",
"dig +short server1.example.com A dig +short server1.example.com AAAA dig +short -x server1_IPv4_or_IPv6_address",
"kinit -kt /etc/dirsrv/ds.keytab ldap/ server1.example.com klist ldapsearch -Y GSSAPI -h server1.example.com -b \"\" -s base ldapsearch -Y GSSAPI -h server2_FQDN . -b \"\" -s base",
"ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed",
"env|grep proxy http_proxy=http://example.com:8080 ftp_proxy=http://example.com:8080 https_proxy=http://example.com:8080",
"for i in ftp http https; do unset USD{i}_proxy; done",
"pkidestroy -s CA -i pki-tomcat; rm -rf /var/log/pki/pki-tomcat /etc/sysconfig/pki-tomcat /etc/sysconfig/pki/tomcat/pki-tomcat /var/lib/pki/pki-tomcat /etc/pki/pki-tomcat /root/ipa.csr",
"ipa-server-install --uninstall",
"ipaserver named[6886]: failed to dynamically load driver 'ldap.so': libldap-2.4.so.2: cannot open shared object file: No such file or directory",
"yum remove bind-chroot",
"ipactl restart",
"CRITICAL Failed to restart the directory server Command '/bin/systemctl restart [email protected]' returned non-zero exit status 1",
"slapd_ldap_sasl_interactive_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Credentials cache file '/tmp/krb5cc_496' not found))",
"set_krb5_creds - Could not get initial credentials for principal [ldap/ replica1.example.com] in keytab [WRFILE:/etc/dirsrv/ds.keytab]: -1765328324 (Generic error)",
"Replication bind with GSSAPI auth resumed",
"ipa: DEBUG: approved_usage = SSLServer intended_usage = SSLServer ipa: DEBUG: cert valid True for \"CN=replica.example.com,O=EXAMPLE.COM\" ipa: DEBUG: handshake complete, peer = 192.0.2.2:9444 Certificate operation cannot be completed: Unable to communicate with CMS (Not Found) ipa: DEBUG: Created connection context.ldap2_21534032 ipa: DEBUG: Destroyed connection context.ldap2_21534032 The DNS forward record replica.example.com. does not match the reverse address replica.example.org",
"Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0x2d not found)",
"ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12",
"ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5 ipa-replica-manage clean-ruv 4 ipa-replica-manage clean-ruv 12",
"dn: cn=clean replica_ID , cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc= example ,dc= com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID",
"ldapsearch -p 389 -h IdM_node -D \"cn=directory manager\" -W -b \"cn=config\" \"(objectclass=nsds5replica)\" nsDS5ReplicaId",
"Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: NEEDED_PREAUTH: admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM, Additional pre-authentication required Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: ISSUE: authtime 1309425108, etypes {rep=18 tkt=18 ses=18}, admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM Jun 30 11:11:49 server1 krb5kdc[1279](info): TGS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: UNKNOWN_SERVER: authtime 0, admin EXAMPLE COM for HTTP/[email protected], Server not found in Kerberos database",
"debug_level = 9",
"systemctl start sssd",
"ipa: ERROR: Kerberos error: ('Unspecified GSS failure. Minor code may provide more information', 851968)/('Decrypt integrity check failed', -1765328353)",
"Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Setting up signal handler up for pid [12370] (Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Signal handler set up for pid [12370] (Wed Jun 14 18:24:08 2017) [sssd[pam]] [pam_initgr_cache_remove] (0x2000): [idmeng] removed from PAM initgroup cache (Wed Jun 14 18:24:13 2017) [sssd[pam]] [p11_child_timeout] (0x0020): Timeout reached for p11_child. (Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_forwarder_cert_cb] (0x0040): get_cert request failed. (Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_reply] (0x0200): pam_reply called with result [4]: System error.",
"certificate_verification = ocsp_default_responder= http://ocsp.proxy.url , ocsp_default_responder_signing_cert= nickname",
"systemctl restart sssd.service",
"ipa: ERROR: Insufficient access: Insufficient 'add' privilege to add the entry 'cn=testvault,cn=user,cn=users,cn=vaults,cn=kra,dc=example,dc=com'.",
"kinit admin",
"ipa vaultcontainer-add-owner --user= user --users= user Owner users: admin, user Vault user: user ------------------------ Number of owners added 1 ------------------------",
"kinit user ipa vault-add testvault2 ------------------------ Added vault \"testvault2\" ------------------------",
"/var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }",
"ipa-replica-prepare replica.example.com --ip-address 192.0.2.2 Directory Manager (existing master) password: Do you want to configure the reverse zone? [yes]: no Preparing replica for replica.example.com from server.example.com Creating SSL certificate for the Directory Server Creating SSL certificate for the dogtag Directory Server Saving dogtag Directory Server port Creating SSL certificate for the Web Server Exporting RA certificate Copying additional files Finalizing configuration Packaging replica information into /var/lib/ipa/replica-info-replica.example.com.gpg Adding DNS records for replica.example.com Waiting for replica.example.com. A or AAAA record to be resolvable This can be safely interrupted (Ctrl+C) The ipa-replica-prepare command was successful",
"yum install ipa-server",
"scp /var/lib/ipa/replica-info-replica.example.com.gpg root@ replica :/var/lib/ipa/",
"ipa-replica-install /var/lib/ipa/replica-info-replica.example.com.gpg Directory Manager (existing master) password: Run connection check to master Check connection from replica to remote master 'server.example.com': Connection from replica to master is OK. Start listening on required ports for remote master check Get credentials to log in to remote master [email protected] password: Check SSH connection to remote master Connection from master to replica is OK. Configuring NTP daemon (ntpd) [1/4]: stopping ntpd [2/4]: writing configuration Restarting Directory server to apply updates [1/2]: stopping directory server [2/2]: starting directory server Done. Restarting the directory server Restarting the KDC Restarting the web server",
"ipa-replica-install /var/lib/ipa/ replica-info-replica.example.com.gpg --setup-dns --forwarder 198.51.100.0",
"ipa-replica-install /var/lib/ipa/ replica-info-replica.example.com.gpg --setup-ca",
"ipa-replica-prepare replica.example.com --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt",
"ipa-replica-manage list server1.example.com : master server2.example.com: master server3.example.com: master server4.example.com: master",
"ipa-replica-manage list server1.example.com server2.example.com: replica server3.example.com: replica",
"ipa-replica-manage connect server1.example.com server2.example.com",
"ipa-replica-manage disconnect server1.example.com server4.example.com",
"ipa-replica-manage del server2.example.com",
"ipa-replica-manage force-sync --from server1.example.com",
"ipa-replica-manage re-initialize --from server1.example.com",
"ipa-replica-manage list server1.example.com: master server2.example.com: master server3.example.com: master server4.example.com: master",
"ipa-replica-manage del server3.example.com",
"ipa-csreplica-manage del server3.example.com",
"ipa-server-install --uninstall -U",
"ipa config-show | grep \"CA renewal master\" IPA CA renewal master: server.example.com",
"ldapsearch -H ldap://USDHOSTNAME -D 'cn=Directory Manager' -W -b 'cn=masters,cn=ipa,cn=etc,dc=example,dc=com' '(&(cn=CA)(ipaConfigString=caRenewalMaster))' dn CA, server.example.com, masters, ipa, etc, example.com dn: cn=CA,cn= server.example.com ,cn=masters,cn=ipa,cn=etc,dc=example,dc=com",
"ipa config-mod --ca-renewal-master-server new_server.example.com",
"ipa-csreplica-manage set-renewal-master"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html-single/Linux_Domain_Identity_Authentication_and_Policy_Guide/index.html |
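The LDIF fragments in the command list above (for example, the cn=cleanallruv task entry and the cn=config cache and plug-in modifications) are not shell commands; they are fed to an LDAP client. The following is a minimal sketch that is not taken from the source guide, assuming the fragment has been saved to a local file named task.ldif and that Directory Manager credentials and an IdM server host name are used (all three are placeholders):

# Apply a fragment that contains a changetype: modify line (for example, the nsslapd-dbcachesize update)
ldapmodify -x -D "cn=directory manager" -W -h server.example.com -p 389 -f task.ldif

# Add a new entry that has no changetype line, such as the cn=clean replica_ID task entry
ldapmodify -a -x -D "cn=directory manager" -W -h server.example.com -p 389 -f task.ldif

The -a flag only changes the default change type to add; fragments that carry an explicit changetype line are applied as written.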
A.4. tuned | A.4. tuned Tuned is a tuning daemon that can adapt the operating system to perform better under certain workloads by setting a tuning profile. It can also be configured to react to changes in CPU and network use and adjusts settings to improve performance in active devices and reduce power consumption in inactive devices. To configure dynamic tuning behavior, edit the dynamic_tuning parameter in the /etc/tuned/tuned-main.conf file. Tuned then periodically analyzes system statistics and uses them to update your system tuning settings. You can configure the time interval in seconds between these updates with the update_interval parameter. For further details about tuned, see the man page: | [
"man tuned"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-tuned |
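To make the tuned settings described in the entry above concrete, here is a minimal sketch of the relevant part of /etc/tuned/tuned-main.conf; the parameter names come from the text above, while the values shown are illustrative rather than the shipped defaults:

# /etc/tuned/tuned-main.conf (excerpt)
# Enable or disable dynamic tuning based on monitored device usage.
dynamic_tuning = 1
# Number of seconds between dynamic-tuning updates.
update_interval = 30

After editing the file, restart the tuned service (for example, systemctl restart tuned) so the new values take effect.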
D.2. Model Explorer View | D.2. Model Explorer View Teiid Designer allows you manage multiple projects containing multiple models and any corresponding or dependent resources. The Model Explorer provides a simple file-structured view of these resources. The Model Explorer (shown below) is comprised of a toolbar and a tree view. Figure D.2. Model Explorer View The toolbar consists of nine common actions: Preview Data - Executes a simple preview query (SELECT * FROM ) . Sort Model Contents - Sorts the contents of the models based on object type and alphabetizing. Refresh Markers - Refreshes error and warning markers for objects in tree. Up - Navigates up one folder/container location. (See Eclipse Help) Collapse All - Collapses all projects. Link with Editor - When object is selected in an open editor, this option auto-selects and reveals object in Model Explorer. Additional Actions The additional actions are shown in the following figure: Figure D.3. Additional Actions If the Show Imports checkbox is selected, the imports will be displayed directly under a model resource as shown below. Figure D.4. Show Model Imports Action | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/model_explorer_view |
Chapter 6. Creating and managing activation keys in the Red Hat Hybrid Cloud Console | Chapter 6. Creating and managing activation keys in the Red Hat Hybrid Cloud Console Your organization's activation keys are listed on the Activation Keys page in the Red Hat Hybrid Cloud Console. You can use an activation key as an authentication token to register a system with Red Hat hosted services, such as Red Hat Subscription Manager or remote host configuration (RHC). Administrators can create, edit, and delete activation keys for your organization. They also have the option to set system-level features, such as system purpose, on an activation key. When you use a preconfigured activation key to register a system, all the selected attributes are automatically applied at the time of registration. 6.1. Activation key management in the Red Hat Hybrid Cloud Console An activation key is a preshared authentication token that enables authorized users to register and configure systems. It eliminates the need to store, use, and share a personal username and password combination, which increases security and facilitates automation. For example, you can use a preconfigured activation key to automatically register a system with all the required system-level features. Additionally, you can put preconfigured activation keys in Kickstart scripts to bulk -provision the registration of multiple systems. You can use an activation key and a numeric organization identifier (organization ID) to register a system with Red Hat hosted services, such as Red Hat Subscription Manager or remote host configuration (RHC). Your organization's activation keys and organization ID are displayed on the Activation Keys page in the Hybrid Cloud Console. Each user's access to the activation keys in the Hybrid Cloud Console is managed through a role-based access control (RBAC) system. Users in the Organization Administrator group for your organization use the RBAC system to assign roles, such as RHC user and RHC administrator, to users within your organization. An RHC user can view the activation keys in the table on the Activation Keys page. Only an RHC administrator is authorized to use the Hybrid Cloud Console user interface to create, edit, and delete activation keys. An RHC administrator also has the option to configure an activation key to apply system purpose attributes (role, service level agreement, or usage) to the system during the registration process. An Organization Administrator has the RHC administrator role by default. In the terminal, users with root privileges can use the activation key and the organization ID to register the system with a single command. If the activation key has been preconfigured with system purpose attributes, the specified attributes are automatically applied to the system upon registration. Additional resources For more information about RBAC roles, see User Access Configuration Guide for Role-based Access Control (RBAC) . For more information about system purpose, see System purpose configuration in Getting Started with RHEL System Registration . 6.2. Creating an activation key As an RHC administrator, you can use the Hybrid Cloud Console interface to create preconfigured activation keys that authorized users in your organization can use to register systems to Red Hat hosted services, such as Red Hat Subscription Manager or remote host configuration (RHC). 
An activation key requires a unique name that enables users to use the activation key by entering the activation key name and organization ID, without requiring a username or password. An activation key can also contain system purpose attributes that can be automatically applied to individual systems at the time of registration. The activation keys that you create can be viewed in the table on the Activation Keys page and used to register systems in the terminal. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To create an activation key in the Hybrid Cloud Console, perform the following steps: Navigate to Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory > System Configuration > Activation Keys . From the Activation Keys page, click Create activation key . In the Name field, enter a unique name for the activation key. Note Your activation key name must be unique, may contain only numbers, letters, underscores, and hyphens, and contain fewer than 256 characters. If you enter a name that already exists in your organization, you will receive an error message and the key will not be created. Optional: To add system purpose attributes to the activation key, navigate to the system purpose field that you want to populate. From the drop-down list, select the attribute value that you want to apply to the system. Note Only the system purpose attributes that are available to your organization's account are selectable. When you have populated all the required fields, click Create . Note === The Create activation key button is disabled until a valid name is entered into the Name field. If the button remains disabled after populating the Name field, check that the name meets the noted criteria and that you are logged in to the Hybrid Cloud Console with the required RBAC role. For questions regarding your RBAC role, contact an Organization Administrator. === 6.3. Viewing an activation key As an RHC user, you can view your organization's numeric identifier (organization ID) and available activation keys on the Activation Keys page in the Hybrid Cloud Console. The activation keys and their respective details are presented in a table. The Name column contains the name of the activation key. The Role column contains the role value for the system purpose attribute set on the key. A potential role value is Red Hat Enterprise Linux Server . The SLA column contains the service level agreement value for the system purpose attribute set on the key. A potential service level agreement value is Premium . The Usage column contains the usage value for the system purpose attribute set on the key. A potential usage value is Production . If no system purpose attribute is set on the activation key, the respective field contains no value. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC user or RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To view an activation key in the Hybrid Cloud Console, perform the following steps: Navigate to Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory > System Configuration > Activation Keys . 6.4. Using an activation key to register a system with Red Hat Subscription Manager The activation keys that you create in the Hybrid Cloud Console combine all the system registration steps into one secure, automated process. 
As a user with root privileges you can register the system, apply pre-configured system purpose attributes, and enable repositories with a single command. Root users can pass an activation key and a numeric organization identifier (organization ID) to the command line tools used to register a system to Red Hat hosted services such as Red Hat Subscription Manager or remote host configuration (RHC). If an RHC administrator has preconfigured the activation key to apply selected system purpose attributes, those attributes are automatically applied to the system during the registration process. Prerequisites You have root privileges or their equivalent to run the commands in the following procedure. You have the numeric identifier for your organization (organization ID). Procedure To use an activation key to register a system with Subscription Manager, perform the following steps: From the terminal, enter the following command where <activation_key_name> is the name of the activation key you want to use and <1234567> is your organization ID: The expected output confirms that your system is registered. For example: 6.5. Using an activation key to register a system with remote host configuration (RHC) The activation keys that you create in the Hybrid Cloud Console combine all the system registration steps into one secure, automated process. As a user with root privileges you can register the system, apply pre-configured system purpose attributes, and enable repositories with a single command. Root users can pass an activation key and a numeric organization identifier (organization ID) to the command line tools used to register a system to Red Hat hosted services such as Red Hat Subscription Manager or remote host configuration (RHC). If an RHC administrator has pre-configured the activation key to apply selected system purpose attributes, those attributes are automatically applied to the system during the registration process. Prerequisites You have root privileges or their equivalent to run the commands in the following procedure. You have the numeric identifier for your organization (organization ID). Procedure To use an activation key to register a system with RHC, perform the following steps: From the terminal, enter the following command where <activation_key_name> is the name of the activation key you want to use and <1234567> is your organization ID: 6.6. Editing an activation key As an RHC administrator, you can use the Hybrid Cloud Console interface to edit the activation keys on the Activation Keys page. Specifically, you can add, update, or remove the system purpose attributes on an existing activation key. However, you cannot edit the name of the activation key itself. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To edit an activation key in the Hybrid Cloud Console, perform the following steps: Navigate to Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory > System Configuration > Activation Keys . From the Activation Keys page, locate the row that contains the activation key that you want to edit. Click More options and select Edit from the overflow menu. To update a system purpose attribute on the activation key, navigate to the system purpose field that you want to change. From the drop-down list, select the attribute value that you want to apply to the system. 
To remove a system purpose attribute from the activation key, navigate to the system purpose field that you want to clear and deselect the unwanted value from the drop-down list. To update the activation key, click Save changes . 6.7. Deleting an activation key As an RHC administrator, you can use the Hybrid Cloud Console interface to delete an activation key from the table on the Activation Keys page. You might want to delete an unwanted or compromised activation key for security or maintenance purposes. However, deleting an activation key that is referenced in an automation script will impact the ability of that automation to function. To avoid any negative impacts to your automated processes, either remove the unwanted activation key from the script or retire the automation script prior to deleting the key. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have the RHC administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To delete an activation key in the Hybrid Cloud Console, perform the following steps: Navigate to Red Hat Hybrid Cloud Console > Red Hat Insights > Inventory > System Configuration > Activation Keys . From the Activation Keys page, locate the row containing the activation key that you want to delete. Click More options and select Delete from the overflow menu. In the Delete Activation Key window, review the information about deleting activation keys. If you want to continue with the deletion, click Delete . Important === Deleting this activation key will impact any automation that references it. To avoid any negative consequences of deleting this key, retire any automation script that uses this key or remove any references of this key from your Kickstart scripts. | [
"subscription-manager register --activationkey= <activation_key_name> --org= <1234567>",
"The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96",
"rhc connect --activation-key <activation_key_name> --organization <1234567>"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/remote_host_configuration_and_management/activation-keys_intro-rhc |
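Because the activation key name and organization ID shown above stand in for a username and password, the same registration commands can be dropped into provisioning automation such as a Kickstart file, as the document notes. A minimal sketch of a Kickstart %post section, using placeholder values for the key name and organization ID and only the commands documented above:

%post --log=/root/register.log
# Register with Red Hat Subscription Manager using a shared activation key (placeholder values).
subscription-manager register --activationkey=my-activation-key --org=1234567
# Alternatively, connect through remote host configuration (rhc) instead:
# rhc connect --activation-key my-activation-key --organization 1234567
%end

In this sketch the rhc variant is left commented out; use whichever registration path your environment standardizes on.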
Chapter 1. Release notes for Red Hat build of Apache Camel for Quarkus 3.8 / 3.8.6.SP2 | Chapter 1. Release notes for Red Hat build of Apache Camel for Quarkus 3.8 / 3.8.6.SP2 1.1. Red Hat build of Apache Camel for Quarkus features Fast startup and low RSS memory Using the optimized build-time and ahead-of-time (AOT) compilation features of Quarkus, your Camel application can be pre-configured at build time resulting in fast startup times. Application generator Use the Quarkus application generator to bootstrap your application and discover its extension ecosystem. Highly configurable All the important aspects of a Red Hat build of Apache Camel for Quarkus application can be set up programmatically with CDI (Contexts and Dependency Injection) or by using configuration properties. By default, a CamelContext is configured and automatically started for you. Check out the Configuring your Quarkus applications by using a properties file guide for more information on the different ways to bootstrap and configure an application. Integrates with existing Quarkus extensions Red Hat build of Apache Camel for Quarkus provides extensions for libraries and frameworks that are used by some Camel components which inherit native support and configuration options. 1.2. Supported platforms, configurations, databases, and extensions For information about supported platforms, configurations, and databases in Red Hat build of Apache Camel 4.4 for Quarkus 3 GA, see the Supported Configuration page on the Customer Portal (login required). For a list of Red Hat Red Hat build of Apache Camel for Quarkus extensions and the Red Hat support level for each extension, see the Extensions Overview chapter of the Red Hat build of Apache Camel for Quarkus Reference (login required). 1.3. BOM files for Red Hat build of Apache Camel for Quarkus To configure your Red Hat Red Hat build of Apache Camel for Quarkus version 3.8 projects to use the supported extensions, use the latest Bill Of Materials (BOM) version 3.8.6.SP2-redhat-00002 or newer, from the Redhat Maven Repository . For more information about BOM dependency management, see Developing Applications with Red Hat build of Apache Camel for Quarkus 1.4. Technology preview extensions Items designated as Technology Preview in the Extensions Overview chapter of the Red Hat build of Apache Camel for Quarkus Reference have limited supportability, as defined by the Technology Preview Features Support Scope. 1.5. Product errata and security advisories 1.5.1. Red Hat build of Apache Camel for Quarkus For the latest Red Hat build of Apache Camel for Quarkus product errata and security advisories, see the Red Hat Product Errata page. 1.5.2. Red Hat build of Quarkus For the latest Red Hat build of Quarkus product errata and security advisories, see the Red Hat Product Errata page. 1.6. Known issues 1.6.1. NoClassDefFoundError when compiling native project Compiling a native project can result in a NoClassDefFoundError like this: Caused by: java.lang.NoClassDefFoundError: io/netty/handler/codec/socksx/v5/Socks5InitialRequest at java.base/jdk.internal.misc.Unsafe.ensureClassInitialized0(Native Method) at java.base/jdk.internal.misc.Unsafe.ensureClassInitialized(Unsafe.java:1160) at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.classinitialization.ClassInitializationSupport.ensureClassInitialized(ClassInitializationSupport.java:177) ... 
55 more Caused by: java.lang.ClassNotFoundException: io.netty.handler.codec.socksx.v5.Socks5InitialRequest at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoadersUSDAppClassLoader.loadClass(ClassLoaders.java:188) The reason is that netty-codec-socks is excluded in the quarkus-qpid-jms extension, which camel-quarkus-amqp depends on. Workaround You can avoid the errror by manually adding the netty-codec-socks depencency. 1.6.2. Issues with Quarkus on AArch64 systems There currently are problems and limitations with Quarkus 3.8 on AArch64 systems. For more information, see the release notes for Red Hat build of Quarkus 3.8: Missing native library for the Kafka Streams extension on AArch64 AArch64 support limitations in JVM mode testing on OpenShift 1.6.3. Websocket + Knative does not work with HTTP2 We support both camel-quarkus-grpc and camel-vertx-websocket with Knative. gRPC needs HTTP2 (you can find instructions on how to enable it here: HTTP2 on Knative ). Unfortunately, Websockets with Knative does not work with HTTP2 (see Ingress Operator in OpenShift Container Platform ). Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol. 1.6.4. Other known issues Quarkus native compilation of camel-quarkus-jackson-avro fails If you try to compile a Camel Quarkus application that uses the camel-quarkus-jackson-avro extension to native code, the compilation fails with an UnsupportedFeatureException error. As a workaround, you can build the application with camel-quarkus-jackson-avro extension using the parameter -Dquarkus.native.additional-build-args=--initialize-at-run-time=org.apache.avro.file.DataFileWriter or add that configuration property to your src/main/resources/application.properties file. For more information, see the support article "Quarkus native compilation of camel-quarkus-jackson-avro fails" in the customer portal. (Login required) Moving from smallrye-metrics to camel-quarkus-micrometer requires manual registration of beans If you are migrating to micrometer from smallrye-metrics , you may need to manually define some beans as scoped. In smallrye-metrics , classes that are registered for metrics (for example with @COUNTED , @METRIC ), but not registered as scoped beans, are registered automatically. This does not happen in micrometer . In micrometer you need to manually register beans accessed via CDI, by for example adding a @Dependent annotation. Camel-quarkus-snmp not supported in Native In Red Hat build of Apache Camel for Quarkus we support the camel-quarkus-snmp component in JVM mode only. 1.7. Known Quarkus CXF issues Note CXF is fully supported, but the following issues remain with this release of Red Hat build of Apache Camel for Quarkus. 1.7.1. Name clash between Service methods with the same name in one Java package If there are two SEIs in one Java package, both having a @WebMethod with the same name but different signature, then the default name for the generated request, response and possibly other classes is the same for both methods of both classes. As of Quarkus CXF 3.8.3, no exception is thrown when this happens during the class generation at build time. At runtime, only one set of those classes is present and therefore calls to one of the clients fail inevitably. 1.8. 
Beans not injected into a service implementation with @CXFEndpoint("/my-path") The @CXFEndpoint annotation was mistakenly introduced in Red Hat build of Apache Camel for Quarkus 3.8.4.SP2 and Quarkus CXF 3.8.4. It is reverted in this release of Red Hat build of Apache Camel for Quarkus 3.8 and Quarkus CXF 3.8.5. The annotation allows you to specify CXF service endpoint paths through the new annotation @CXFEndpoint("/myPath") . This does not work well for service implementation classes having both @WebService and @CXFEndpoint annotations. In those cases, if the service has some @Inject fields, those fields are left blank and the service call throws a NullPointerException . Service implementations that do not have the @CXFEndpoint annotations are unaffected. We recommend that you continue to specify service endpoint paths in application.properties as before: Example quarkus.cxf.endpoint."/myPath".implementor = org.acme.MyServiceImpl 1.9. Important notes 1.9.1. The javax to jakarta Package Namespace Change With the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE, the packages used for all EE APIs have changed to jakarta.* since Jakarta EE 9. Code snippets in documentation have been updated to use the jakarta.* namespace, but you need to take care to review your own applications. Note This change does not affect javax packages that are part of Java SE. When migrating applications to EE 10, you need to: Update any import statements or other source code uses of EE API classes from the javax package to jakarta . Change any EE-specified system properties or other configuration properties whose names begin with javax. to begin with jakarta. . Use the META-INF/services/jakarta.[rest_of_name] name format to identify implementation classes in your applications that implement EE interfaces or abstract classes bootstrapped with the java.util.ServiceLoader mechanism. 1.9.1.1. Migration tools Source code migration: How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace Bytecode transforms: For cases where source code migration is not an option, the open source Eclipse Transformer Additional resources Background: Update on Jakarta EE Rights to Java Trademarks Red Hat Customer Portal: Red Hat JBoss EAP Application Migration from Jakarta EE 8 to EE 10 Jakarta EE: Javax to Jakarta Namespace Ecosystem Progress 1.9.2. Support for IBM Power and IBM Z Red Hat build of Apache Camel for Quarkus is now supported on IBM Power and IBM Z. 1.9.3. Minimum Java version - JDK 17 Red Hat build of Apache Camel for Quarkus version 3.8 requires JDK 17 or newer. 1.9.4. Support for OpenJDK Red Hat build of Apache Camel for Quarkus version 3.8 includes support for OpenJDK 21. 1.9.5. Support for AdoptiumJDK Red Hat build of Apache Camel for Quarkus version 3.8 includes support for AdoptiumJDK 17 and AdoptiumJDK 21. 1.9.6. Upgrades 1.9.7. Camel upgraded from version 4.0 to version 4.4 Red Hat build of Apache Camel for Quarkus version 3.8 has been upgraded from Camel version 4.0 to Camel version 4.4. For additional information about each intervening Camel patch release, refer to the following: Apache Camel 4.0.1 Release Notes Apache Camel 4.0.2 Release Notes Apache Camel 4.0.3 Release Notes Apache Camel 4.0.4 Release Notes Apache Camel 4.0.5 Release Notes Apache Camel 4.1.0 Release Notes Apache Camel 4.2.0 Release Notes Apache Camel 4.3.0 Release Notes Apache Camel 4.4.0 Release Notes 1.9.8.
Camel Quarkus upgraded from version 3.2 to version 3.8 Red Hat build of Apache Camel for Quarkus version 3.8 has been upgraded from Camel Quarkus version 3.2 to Camel Quarkus version 3.8. For additional information about each intervening Camel Quarkus patch release, refer to the following: Apache Camel Quarkus 3.2.1 Release Notes Apache Camel Quarkus 3.2.2 Release Notes Apache Camel Quarkus 3.2.3 Release Notes Apache Camel Quarkus 3.4.0 Release Notes Apache Camel Quarkus 3.5.0 Release Notes Apache Camel Quarkus 3.6.0 Release Notes Apache Camel Quarkus 3.7.0 Release Notes Apache Camel Quarkus 3.8.0 Release Notes 1.10. Resolved issues The following list shows known issues that were affecting Red Hat build of Apache Camel for Quarkus, which have been fixed in Red Hat build of Apache Camel for Quarkus version 3.8. 1.10.1. Resolved issues in Quarkus CXF 3.8.6 1.10.1.1. Passing multiple namespace mappings via quarkus.cxf.codegen.wsdl2java.package-names Before Quarkus CXF 3.8.6, the values specified in quarkus.cxf.codegen.wsdl2java.package-names were wrongly passed as a single comma-separated value of the -p option, leading to BadUsageException: -p has invalid character! . Since Quarkus CXF 3.8.6, if quarkus.cxf.codegen.wsdl2java.package-names specifies multiple mappings, such as application.properties quarkus.cxf.codegen.wsdl2java.package-names = http://www.example.org/add=io.quarkiverse.cxf.wsdl2java.it.add, http://www.example.org/multiply=io.quarkiverse.cxf.wsdl2java.it.multiply then they are properly passed to wsdl2java as multiple -p options: wsdl2java \ -p http://www.example.org/add=io.quarkiverse.cxf.wsdl2java.it.add \ -p http://www.example.org/multiply=io.quarkiverse.cxf.wsdl2java.it.multiply \ ... 1.10.1.2. Beans not injected into a service implementation with @CXFEndpoint("/my-path") When backporting fixes from the main branch of Quarkus CXF to 3.8, we mistakenly ported a new feature allowing you to specify CXF service endpoint paths through the new annotation @CXFEndpoint("/myPath"). The new code did not work well for service implementation classes having both @WebService and @CXFEndpoint annotations. In those cases, if the service has some @Inject fields, those fields were left blank and the service call threw a NullPointerException . Service implementations not using the new @CXFEndpoint annotations are unaffected. The code introducing the new @CXFEndpoint("/myPath") functionality was removed in this release. We recommend that you continue to specify service endpoint paths in application.properties , for example: Example quarkus.cxf.endpoint."/myPath".implementor = org.acme.MyServiceImpl 1.10.2. Resolved issues in Red Hat build of Apache Camel for Quarkus 4.4 CEQ-8857 Camel-http producer sets "Content-Encoding=UTF-8" 1.10.3. releases For details of issues resolved between Camel Quarkus 3.2 and Camel Quarkus 3.8, see the Release Notes for each patch release. 1.11. Deprecated features in Red Hat build of Apache Camel for Quarkus version 3.8 The following capabilities are deprecated in this release of Red Hat build of Apache Camel for Quarkus and will not be available in the next major release. 1.11.1. Openapi-java support for Openapi v2 OpenApi V2 is deprecated in 3.8, due to dropped support in Openapi-java with Camel 4.5.x. 1.12. Extensions removed in Red Hat build of Apache Camel for Quarkus version 3.8 No extensions are removed in the Red Hat build of Apache Camel for Quarkus version 3.8 release. 1.13.
Extensions added in Red Hat build of Apache Camel for Quarkus version 3.8 The following table lists the extensions added in the Red Hat build of Apache Camel for Quarkus version 3.8 release. Table 1.1. Added extensions Extension Artifact Description Jasypt camel-quarkus-jasypt Security using Jasypt JSON Path camel-quarkus-jsonpath Evaluate a JSONPath expression against a JSON message body JT400 camel-quarkus-jt400 Exchanges messages with an IBM i system using data queues, message queues, or program calls. IBM i is the replacement for AS/400 and iSeries servers. Kudu camel-quarkus-kudu Interact with Apache Kudu, a free and open source column-oriented data store of the Apache Hadoop ecosystem. LRA camel-quarkus-lra Camel saga binding for the Long-Running-Action framework. Saga camel-quarkus-saga Execute custom actions within a route using the Saga EIP. Splunk HEC camel-quarkus-splunk-hec The splunk component allows you to publish events in Splunk using the HTTP Event Collector. XJ camel-quarkus-xj Transform JSON and XML messages using XSLT. 1.14. Extensions with changed support in Red Hat build of Apache Camel for Quarkus version 3.8 No extensions have changed support levels in the Red Hat build of Apache Camel for Quarkus version 3.8 release. Note For information about support levels, see Red Hat build of Apache Camel for Quarkus Extensions . 1.15. Data formats added in Red Hat build of Apache Camel for Quarkus version 3.8 No data formats have been added in the Red Hat build of Apache Camel for Quarkus version 3.8 release. 1.16. Additional resources Supported Configurations Red Hat build of Apache Camel for Quarkus Extensions Getting Started with Red Hat build of Apache Camel for Quarkus Developing Applications with Red Hat build of Apache Camel for Quarkus | [
"Caused by: java.lang.NoClassDefFoundError: io/netty/handler/codec/socksx/v5/Socks5InitialRequest at java.base/jdk.internal.misc.Unsafe.ensureClassInitialized0(Native Method) at java.base/jdk.internal.misc.Unsafe.ensureClassInitialized(Unsafe.java:1160) at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.classinitialization.ClassInitializationSupport.ensureClassInitialized(ClassInitializationSupport.java:177) ... 55 more Caused by: java.lang.ClassNotFoundException: io.netty.handler.codec.socksx.v5.Socks5InitialRequest at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) at java.base/jdk.internal.loader.ClassLoadersUSDAppClassLoader.loadClass(ClassLoaders.java:188)",
"quarkus.cxf.endpoint.\"/myPath\".implementor = org.acme.MyServiceImpl",
"quarkus.cxf.codegen.wsdl2java.package-names = http://www.example.org/add=io.quarkiverse.cxf.wsdl2java.it.add, http://www.example.org/multiply=io.quarkiverse.cxf.wsdl2java.it.multiply",
"wsdl2java -p http://www.example.org/add=io.quarkiverse.cxf.wsdl2java.it.add -p http://www.example.org/multiply=io.quarkiverse.cxf.wsdl2java.it.multiply",
"quarkus.cxf.endpoint.\"/myPath\".implementor = org.acme.MyServiceImpl"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/release_notes_for_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-relnotes_ceq |
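A note on the NoClassDefFoundError workaround above: the release notes only name the missing artifact, so the following pom.xml fragment is a minimal sketch of how the netty-codec-socks dependency might be declared. The io.netty coordinates are the standard Netty module coordinates, and the omitted version is an assumption that the imported platform BOM manages it; add an explicit version if your build does not.

    <!-- Hedged sketch: restore the Netty SOCKS codec that quarkus-qpid-jms excludes. -->
    <!-- The version is assumed to be managed by the imported platform BOM. -->
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-codec-socks</artifactId>
    </dependency>

After adding the dependency, rebuild the native image to confirm that the Socks5InitialRequest class is found at build time.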
1.3. Verify Downloaded Files | 1.3. Verify Downloaded Files Procedure 1.2. Verify File Checksums on Red Hat Enterprise Linux Obtain checksum values for the downloaded file Go to https://access.redhat.com/jbossnetwork/ . Log in if required. Select your Product and Version . Select the packages you want to verify. Once you have chosen them, navigate to the Software Details page. Take note of the MD5 and SHA-256 checksum values. Run a checksum tool on the file Navigate to the directory containing the downloaded file in a terminal window. Run md5sum downloaded_file . Run shasum downloaded_file . Example output: Compare the checksum values returned by the md5sum and shasum commands with the corresponding values displayed on the Software Details page. Download the file again if the two checksum values are not identical. A difference between the checksum values indicates that the file has either been corrupted during download or has been modified since it was uploaded to the server. Contact Red Hat Support for assistance if after several downloads the checksum does not successfully validate. Note No checksum tool is included with Microsoft Windows. Download a third-party MD5 application such as MD5 Summer from http://www.md5summer.org/ . | [
"[localhost]USD md5sum jboss-dv-installer-[VERSION]-redhat-[VERSION].jar MD5 (jboss-dv-installer-[VERSION]-redhat-[VERSION].jar) = 0d1e72a6b038d8bd27ed22b196e5887f [localhost]USD shasum jboss-dv-installer-[VERSION]-redhat-[VERSION].jar a74841391bd243d2ca29f31cd9f190f3f1bdc02d jboss-dv-installer-[VERSION]-redhat-[VERSION].jar"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/verify_downloaded_files |
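The verification procedure above compares the md5sum and shasum output against the portal values by eye. The following shell sketch automates that comparison; the file name and the expected digest values are placeholders for whatever the Software Details page shows, and it assumes the coreutils md5sum and sha256sum tools that ship with Red Hat Enterprise Linux.

    # Replace the placeholders with the file name and checksum values from the Software Details page.
    # Each command prints "<file>: OK" when the computed digest matches the expected value.
    echo "<MD5 value>  jboss-dv-installer-VERSION-redhat-VERSION.jar" | md5sum -c -
    echo "<SHA-256 value>  jboss-dv-installer-VERSION-redhat-VERSION.jar" | sha256sum -c -

If either check reports a mismatch, download the file again before proceeding.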
Chapter 13. Changing the MTU for the cluster network | Chapter 13. Changing the MTU for the cluster network As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change. You can change the MTU only for clusters that use the OVN-Kubernetes plugin or the OpenShift SDN network plugin. 13.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure. Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance. 13.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 13.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes. For OpenShift SDN, the overhead is 50 bytes. If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 13.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 13.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. 
The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. The overhead for OVN-Kubernetes is 100 bytes and for OpenShift SDN is 50 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 13.2. Changing the cluster network MTU As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster. Important The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. The following procedure describes how to change the cluster network MTU by using either machine configs, Dynamic Host Configuration Protocol (DHCP), or an ISO image. If you use either the DHCP or ISO approaches, you must refer to configuration artifacts that you kept after installing your cluster to complete the procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. The MTU for the OpenShift SDN network plugin must be set to 50 less than the lowest hardware MTU value in your cluster. Procedure To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23 ... Prepare your configuration for the hardware MTU: If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration: dhcp-option-force=26,<mtu> where: <mtu> Specifies the hardware MTU for the DHCP server to advertise. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for OpenShift Container Platform if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified. 
Find the primary network interface: If you are using the OpenShift SDN network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }' where: <node_name> Specifies the name of a node in your cluster. If you are using the OVN-Kubernetes network plugin, enter the following command: USD oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 where: <node_name> Specifies the name of a node in your cluster. Create the following NetworkManager configuration in the <interface>-mtu.conf file: Example NetworkManager connection configuration [connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu> where: <mtu> Specifies the new hardware MTU value. <interface> Specifies the primary network interface name. Create two MachineConfig objects, one for the control plane nodes and another for the worker nodes in your cluster: Create the following Butane config in the control-plane-interface.bu file: variant: openshift version: 4.15.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create the following Butane config in the worker-interface.bu file: variant: openshift version: 4.15.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600 1 Specify the NetworkManager connection name for the primary network interface. 2 Specify the local filename for the updated NetworkManager configuration file from the step. Create MachineConfig objects from the Butane configs by running the following command: USD for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done Warning Do not apply these machine configs until explicitly instructed later in this procedure. Applying these machine configs now causes a loss of stability for the cluster. To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to> . For OVN-Kubernetes, this value must be 100 less than the value of <machine_to> . For OpenShift SDN, this value must be 50 less than the value of <machine_to> . <machine_to> Specifies the MTU for the primary network interface on the underlying host network. 
Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh Update the underlying network interface MTU value: If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The MachineConfig Operator automatically performs a rolling reboot of the nodes in your cluster. USD for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure. As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. 
Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep path: where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. If the machine config is successfully deployed, the output contains the /etc/NetworkManager/conf.d/99-<interface>-mtu.conf file path and the ExecStart=/usr/local/bin/mtu-migration.sh line. Finalize the MTU migration for your plugin. In both example commands, <mtu> specifies the new cluster network MTU that you specified with <overlay_to> . To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' To finalize the MTU migration, enter the following command for the OpenShift SDN network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "openshiftSDNConfig": { "mtu": <mtu> }}}}' After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Verification To get the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Get the current MTU for the primary network interface of a node: To list the nodes in your cluster, enter the following command: USD oc get nodes To obtain the current MTU setting for the primary network interface on a node, enter the following command: USD oc debug node/<node> -- chroot /host ip address show <interface> where: <node> Specifies a node from the output from the step. <interface> Specifies the primary network interface name for the node. Example output ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051 13.3. Additional resources Using advanced networking options for PXE and ISO installations Manually creating NetworkManager profiles in key file format Configuring a dynamic Ethernet connection using nmcli | [
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.15.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.15.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get machineconfigpools",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/changing-cluster-network-mtu |
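As a follow-up to the MTU selection guidance in the procedure above, the following sketch shows one way to check the maxmtu that a node's network interface driver reports, using the same oc debug pattern as the procedure. The node and interface names are placeholders, maxmtu appears in the detailed output only for drivers that expose it, and the arithmetic shown assumes OVN-Kubernetes (subtract 50 instead of 100 for OpenShift SDN).

    # Inspect the driver-reported MTU limits on a node's primary interface.
    oc debug node/<node_name> -- chroot /host ip -d link show <interface> | grep -o 'maxmtu [0-9]*'

    # Derive a candidate cluster network MTU from the lowest hardware MTU in the cluster.
    # Example: a lowest hardware MTU of 9000 gives 9000 - 100 = 8900 for OVN-Kubernetes.

Running this check on every node before setting mtu.machine.to helps avoid selecting a value that a network interface cannot accept.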
Chapter 34. Top Applications analytics graph | Chapter 34. Top Applications analytics graph The Top Applications analytics graph provides developers with a tool for optimizing API performance and gaining insights into usage patterns. Developers can identify high-performing applications, understand real-time metrics, and make data-driven decisions to enhance the overall API experience. The Top Applications analytics graph in Products > [Your_product_name] > Analytics > Top Applications shows data during periods for the time zone configured in the 3scale Admin Portal account. Top applications are retrieved based on fixed periods: The last 24 hours The last 7 days The last 30 days Date ranges, for example: from 09/27/2023 until 10/25/2023 If you search, for example using 05/17/2023 , the resulting graph shows the usage from 01/01/2023 to 12/31/2023 . This is the calendar range which most closely fits the calendar period chosen by the user, and the graph is displayed according to that calendar period. When checking Top Applications analytics, there are two things to bear in mind: Top applications are retrieved based on a fixed calendar period (day/week/month/year). The period is shown under the chart. Note: Between midnight 04/30/2023 and midnight 05/31/2023 means: from 2023/05/01 00:00:00 inclusive to 2023/06/01 00:00:00 exclusive, that is, until 2023/05/31 23:59:59 inclusive. The actual usage for the applications shown in the statistics is based on the user-selected period, not the calendar period. Therefore, sometimes there might be a situation where an application is displayed in the top applications view because it has a high usage in the calendar period, but has 0 usage for the period actually selected by the user. Example The user enters a date range using the calendar: from 10/25/2023 until 10/27/2023 . Based on that date range, a period is selected that matches the user-selected date range as closely as possible. This must comprise a single granularity, for example, 1 calendar day, 1 calendar month. The data store is queried for the top applications in that period: 10/25/2023 to 10/27/2023 . For the list of top applications returned, the usage is displayed using the user's original date range entered: 10/25/2023 to 10/27/2023 . | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/top-applications-analytics-graph_analytics-for-threescale-apis |