Dataset fields:
  title     string     length 4 to 168
  content   string     length 7 to 1.74M
  commands  sequence   1 to 5.62k items
  url       string     length 79 to 342
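For illustration only, here is a minimal sketch of how records with these fields could be loaded and inspected, assuming the data is published in a Hugging Face `datasets` layout; the path "<org>/<dataset_name>" is a placeholder, not the actual identifier.

from datasets import load_dataset

# Hypothetical dataset path; substitute the real identifier.
ds = load_dataset("<org>/<dataset_name>", split="train")

row = ds[0]
print(row["title"])         # page title, e.g. "Installing on AWS"
print(len(row["content"]))  # length of the extracted page text
print(row["commands"])      # list of command/config snippets, or None
print(row["url"])           # source URL on docs.redhat.com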
Appendix B. Revision History
Appendix B. Revision History
Revision History
Revision 1.0-22   Thu May 23 2019   Jiri Herrmann   Version for 7.7 Beta publication
Revision 1.0-21   Thu Oct 25 2018   Jiri Herrmann   Version for 7.6 GA publication
Revision 1.0-21   Thu Aug 14 2018   Jiri Herrmann   Version for 7.6 Beta publication
Revision 1.0-20   Thu Apr 5 2018    Jiri Herrmann   Version for 7.5 GA publication
Revision 1.0-18   Thu Jul 27 2017   Jiri Herrmann   Version for 7.4 GA publication
Revision 1.0-15   Mon Oct 17 2016   Jiri Herrmann   Version for 7.3 GA publication
Revision 1.0-9    Thu Oct 08 2015   Jiri Herrmann   Cleaned up the Revision History
Revision 1.0-8    Wed Feb 18 2015   Scott Radvan    Version for 7.1 GA release.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/appe-virtualization_security_guide-revision_history
Installing on AWS
Installing on AWS
OpenShift Container Platform 4.14
Installing OpenShift Container Platform on Amazon Web Services
Red Hat OpenShift Documentation Team
[ "platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2", "compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole", "controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 
19 pullSecret: '{\"auths\": ...}' 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", 
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 
20 pullSecret: '{\"auths\": ...}' 21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", 
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: 
defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "subnets: - subnet-1 - subnet-2 - subnet-3", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 
machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for 
Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: 
example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 pullSecret: '{\"auths\": ...}' 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 
--to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-gov-west-1a - us-gov-west-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-gov-west-1c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-gov-west-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-iso-east-1a - us-iso-east-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-iso-east-1a - us-iso-east-1b replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-iso-east-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: 
ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24 additionalTrustBundle: | 25 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE 
--credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 
--install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", 
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-localzone compute: - name: edge platform: aws: type: m5.4xlarge platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-localzone compute: - name: edge platform: aws: rootVolume: type: gp3 size: 120 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "apiVersion: v1 baseDomain: devcluster.openshift.com metadata: name: ipi-localzone compute: - name: edge platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 platform: aws: region: us-west-2 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": 
\"Allow\", \"Resource\": \"*\" } ] }", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "platform: aws: region: <region_name> 1 compute: - name: edge platform: aws: zones: 2 - <local_zone_name> #", "apiVersion: v1 baseDomain: example.com metadata: name: cluster-name platform: aws: region: us-west-2 compute: - name: edge platform: aws: zones: - us-west-2-lax-1a - us-west-2-lax-1b - us-west-2-las-1a pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...' #", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"3\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "[ { \"ParameterKey\": \"VpcId\", \"ParameterValue\": \"<value_of_VpcId>\" 1 }, { \"ParameterKey\": \"PublicRouteTableId\", \"ParameterValue\": \"<value_of_PublicRouteTableId>\" 2 }, { \"ParameterKey\": \"ZoneName\", \"ParameterValue\": \"<value_of_ZoneName>\" 3 }, { \"ParameterKey\": \"SubnetName\", \"ParameterValue\": \"<value_of_SubnetName>\" }, { \"ParameterKey\": \"PublicSubnetCidr\", \"ParameterValue\": \"10.0.192.0/20\" 4 } ]", "aws cloudformation create-stack --stack-name <subnet_stack_name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:123456789012:stack/<subnet_stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <subnet_stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: 
DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "CloudFormation template used to create Local Zone subnets and dependencies AWSTemplateFormatVersion: 2010-09-09 Description: Template for create Public Local Zone subnets Parameters: VpcId: Description: VPC Id Type: String ZoneName: Description: Local Zone Name (Example us-east-1-nyc-1a) Type: String SubnetName: Description: Local Zone Name (Example cluster-public-us-east-1-nyc-1a) Type: String PublicRouteTableId: Description: Public Route Table ID to associate the Local Zone subnet Type: String PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. 
Default: 10.0.128.0/20 Description: CIDR block for Public Subnet Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Ref SubnetName - Key: kubernetes.io/cluster/unmanaged Value: \"true\" PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId Outputs: PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]]", "platform: aws: region: us-west-2 subnets: 1 - publicSubnetId-1 - publicSubnetId-2 - publicSubnetId-3 - privateSubnetId-1 - privateSubnetId-2 - privateSubnetId-3 - publicSubnetId-LocalZone-1", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE cluster-7xw5g-edge-us-east-1-nyc-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1a 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1b 1 1 1 1 3h4m cluster-7xw5g-worker-us-east-1c 1 1 1 1 3h4m", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE cluster-7xw5g-edge-us-east-1-nyc-1a-wbclh Running c5d.2xlarge us-east-1 us-east-1-nyc-1a 3h cluster-7xw5g-master-0 Running m6i.xlarge us-east-1 us-east-1a 3h4m cluster-7xw5g-master-1 Running m6i.xlarge us-east-1 us-east-1b 3h4m cluster-7xw5g-master-2 Running m6i.xlarge us-east-1 us-east-1c 3h4m cluster-7xw5g-worker-us-east-1a-rtp45 Running m6i.xlarge us-east-1 us-east-1a 3h cluster-7xw5g-worker-us-east-1b-glm7c Running m6i.xlarge us-east-1 us-east-1b 3h cluster-7xw5g-worker-us-east-1c-qfvz4 Running m6i.xlarge us-east-1 us-east-1c 3h", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. 
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: 
\"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. 
Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. 
Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - 
'{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister 
Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the worker nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The worker security group ID to associate with worker nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with worker nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True 
False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "aws outposts get-outpost-instance-types --outpost-id <outpost_id> 1", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: {} replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: aws: type: m5.large 8 zones: - us-east-1a 9 rootVolume: type: gp2 10 size: 120 replicas: 3 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 13 propagateUserTags: true 14 userTags: adminContact: jdoe costCenter: 7536 subnets: 15 - subnet-1 - subnet-2 - subnet-3 sshKey: ssh-ed25519 AAAA... 
16 pullSecret: '{\"auths\": ...}' 17", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir <installation_-_directory>", "INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift", "tree . ├── manifests │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_cloud-creds-secret.yaml ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-machines-0.yaml ├── 99_openshift-cluster-api_master-machines-1.yaml ├── 99_openshift-cluster-api_master-machines-2.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-machineset-0.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml ├── 99_role-cloud-creds-secret-reader.yaml └── openshift-install-manifests.yaml", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", 
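"echo -n '<aws_access_key_id_value>' | base64 # hypothetical helper, not part of the documented procedure: one way to produce the base64-encoded aws_access_key_id and aws_secret_access_key values referenced in the manual-mode Secret above",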
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", 
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc annotate --overwrite storageclass gp3-csi storageclass.kubernetes.io/is-default-class=false oc annotate --overwrite storageclass gp2-csi storageclass.kubernetes.io/is-default-class=true", "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2", "2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 
Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted", "./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2", "aws cloudformation delete-stack --stack-name <local_zone_stack_name>", "aws cloudformation delete-stack --stack-name <vpc_stack_name>", "aws cloudformation describe-stacks --stack-name <local_zone_stack_name>", "aws cloudformation describe-stacks --stack-name <vpc_stack_name>", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "platform: aws: lbType:", "publish:", "sshKey:", "compute: platform: aws: amiID:", "compute: platform: aws: iamRole:", "compute: platform: aws: rootVolume: iops:", "compute: platform: aws: rootVolume: size:", "compute: platform: aws: rootVolume: type:", "compute: platform: aws: rootVolume: kmsKeyARN:", "compute: platform: aws: type:", "compute: platform: aws: zones:", "compute: aws: region:", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "controlPlane: platform: aws: amiID:", "controlPlane: platform: aws: iamRole:", "controlPlane: platform: aws: rootVolume: iops:", "controlPlane: platform: aws: rootVolume: size:", "controlPlane: platform: aws: rootVolume: type:", "controlPlane: platform: aws: rootVolume: kmsKeyARN:", "controlPlane: platform: aws: type:", "controlPlane: platform: aws: zones:", "controlPlane: aws: region:", "platform: aws: amiID:", "platform: aws: hostedZone:", "platform: aws: hostedZoneRole:", "platform: aws: serviceEndpoints: - name: url:", "platform: aws: userTags:", "platform: aws: propagateUserTags:", "platform: aws: subnets:", "platform: aws: preserveBootstrapIgnition:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_aws/index
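A quick pre-flight check can save a failed run of the manual-credentials workflow above: the install-config.yaml must declare credentialsMode: Manual before the CredentialsRequests are extracted and the ccoctl-generated manifests are copied in. The following Python sketch is an editorial illustration only (it is not part of the OpenShift tooling) and assumes PyYAML is installed and that install-config.yaml is in the current directory:

```python
#!/usr/bin/env python3
# Hypothetical pre-flight check for the manual credentials (ccoctl) workflow:
# confirm credentialsMode is Manual and report the configured AWS region.
import sys
import yaml  # pip install pyyaml

with open("install-config.yaml") as f:
    config = yaml.safe_load(f)

mode = config.get("credentialsMode")
region = config.get("platform", {}).get("aws", {}).get("region")

if mode != "Manual":
    sys.exit(f"credentialsMode is {mode!r}; expected 'Manual' for the ccoctl workflow")

print(f"OK: credentialsMode=Manual, AWS region={region}")
```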
1.5.4. Applying the Changes
1.5.4. Applying the Changes After downloading and installing security errata and updates, it is important to halt usage of the older software and begin using the new software. How this is done depends on the type of software that has been updated. The following list itemizes the general categories of software and provides instructions for using the updated versions after a package upgrade. Note In general, rebooting the system is the surest way to ensure that the latest version of a software package is used; however, this option is not always required or available to the system administrator. Applications User-space applications are any programs that can be initiated by a system user. Typically, such applications are used only when a user, script, or automated task utility launches them and they do not persist for long periods of time. Once such a user-space application is updated, halt any instances of the application on the system and launch the program again to use the updated version. Kernel The kernel is the core software component for the Red Hat Enterprise Linux operating system. It manages access to memory, the processor, and peripherals, and schedules all tasks. Because of its central role, the kernel cannot be restarted without also stopping the computer. Therefore, an updated version of the kernel cannot be used until the system is rebooted. Shared Libraries Shared libraries are units of code, such as glibc , which are used by a number of applications and services. Applications utilizing a shared library typically load the shared code when the application is initialized, so any applications using the updated library must be halted and relaunched. To determine which running applications link against a particular library, use the lsof command: lsof <path> For example, to determine which running applications link against the libwrap.so library, type: This command returns a list of all the running programs which use TCP wrappers for host access control. Therefore, any program listed must be halted and relaunched if the tcp_wrappers package is updated. SysV Services SysV services are persistent server programs launched during the boot process. Examples of SysV services include sshd , vsftpd , and xinetd . Because these programs usually persist in memory as long as the machine is booted, each updated SysV service must be halted and relaunched after the package is upgraded. This can be done using the Services Configuration Tool or by logging into a root shell prompt and issuing the /sbin/service command: /sbin/service <service-name> restart Replace <service-name> with the name of the service, such as sshd . xinetd Services Services controlled by the xinetd super service only run when there is an active connection. Examples of services controlled by xinetd include Telnet, IMAP, and POP3. Because new instances of these services are launched by xinetd each time a new request is received, connections that occur after an upgrade are handled by the updated software. However, if there are active connections at the time the xinetd controlled service is upgraded, they are serviced by the older version of the software. To kill off older instances of a particular xinetd controlled service, upgrade the package for the service, then halt all processes currently running. To determine if the process is running, use the ps or pgrep command and then use the kill or killall command to halt current instances of the service. 
For example, if security errata for the imap packages are released, upgrade the packages, then type the following command as root at a shell prompt: This command returns all active IMAP sessions. Individual sessions can then be terminated by issuing the following command as root: kill <PID> If this fails to terminate the session, use the following command instead: kill -9 <PID> In the examples, replace <PID> with the process identification number (found in the first column of the pgrep -l output) for an IMAP session. To kill all active IMAP sessions, issue the following command:
[ "~]# lsof /lib64/libwrap.so* COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sshd 13600 root mem REG 253,0 43256 400501 /lib64/libwrap.so.0.7.6 sshd 13603 juan mem REG 253,0 43256 400501 /lib64/libwrap.so.0.7.6 gnome-set 14898 juan mem REG 253,0 43256 400501 /lib64/libwrap.so.0.7.6 metacity 14925 juan mem REG 253,0 43256 400501 /lib64/libwrap.so.0.7.6 [output truncated]", "~]# pgrep -l imap 1439 imapd 1788 imapd 1793 imapd", "~]# killall imapd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Updating_Packages-Applying_the_Changes
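Beyond the lsof and pgrep checks shown above, another common way to confirm that every long-running process has picked up an updated shared library is to look for mappings of the old (deleted) library file under /proc. The sketch below is an editorial illustration, not part of the original guide; run it as root so that every process's maps file is readable, and adjust the library name pattern to the package you updated:

```python
#!/usr/bin/env python3
# Illustrative helper: list processes that still map a replaced (deleted)
# shared library, for example after a tcp_wrappers or glibc update.
import glob
import re
import sys

pattern = sys.argv[1] if len(sys.argv) > 1 else "libwrap.so"

for maps_path in glob.glob("/proc/[0-9]*/maps"):
    pid = maps_path.split("/")[2]
    try:
        with open(maps_path) as maps:
            text = maps.read()
    except OSError:
        continue  # process exited or maps not readable
    if re.search(re.escape(pattern) + r"\S*\s+\(deleted\)", text):
        try:
            with open("/proc/" + pid + "/comm") as comm:
                name = comm.read().strip()
        except OSError:
            name = "?"
        print(pid + "\t" + name + "\tstill uses the old " + pattern)
```

Any process reported this way should be restarted (or the system rebooted) so that it loads the updated library.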
11.2. Runtime Metadata Updates
11.2. Runtime Metadata Updates Runtime updates via system procedures and DDL statements are by default ephemeral. They are effective across the cluster only for the currently running VDBs. When the VDB is next started, the values revert to whatever is stored in the VDB. Updates may be made persistent by configuring an org.teiid.metadata.MetadataRepository . An instance of a MetadataRepository can be installed via the VDB file. In a Designer-based VDB, you can edit the vdb.xml file in the META-INF directory, or use a dynamic VDB file as shown below. <vdb name="{vdb-name}" version="1"> <model name="{model-name}" type="VIRTUAL"> <metadata type="{jboss-as-module-name}"></metadata> </model> </vdb> In the above code fragment, replace the {jboss-as-module-name} with a JBoss EAP module name that contains a library implementing the org.teiid.metadata.MetadataRepository interface and that defines the file "META-INF/services/org.teiid.metadata.MetadataRepository" containing the name of the implementation class. The MetadataRepository instance may implement as many of the methods as needed and return null from any unneeded getter. Note It is not recommended to directly manipulate org.teiid.metadata.AbstractMetadataRecord instances. System procedures and DDL statements should be used instead since the effects will be distributed through the cluster and will not introduce inconsistencies. org.teiid.metadata.AbstractMetadataRecord objects passed to the MetadataRepository have not yet been modified. If the MetadataRepository cannot persist the update, then a RuntimeException should be thrown to prevent the update from being applied by the runtime engine. Note The MetadataRepository can be accessed by multiple threads, both during load (if using dynamic VDBs) and at runtime with DDL statements. Your implementation should handle any needed synchronization.
[ "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"VIRTUAL\"> <metadata type=\"{jboss-as-module-name}\"></metadata> </model> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/runtime_metadata_updates
8.13. Configure teamd Runners
8.13. Configure teamd Runners Runners are units of code which are compiled into the Team daemon when an instance of the daemon is created. For an introduction to the teamd runners, see Section 8.4, "Understanding the Network Teaming Daemon and the "Runners"" . 8.13.1. Configure the broadcast Runner To configure the broadcast runner, using an editor as root , add the following to the team JSON format configuration file: Please see the teamd.conf(5) man page for more information. 8.13.2. Configure the random Runner The random runner behaves similarly to the round-robin runner. To configure the random runner, using an editor as root , add the following to the team JSON format configuration file: Please see the teamd.conf(5) man page for more information. 8.13.3. Configure the Round-robin Runner To configure the round-robin runner, using an editor as root , add the following to the team JSON format configuration file: A very basic configuration for round-robin. Please see the teamd.conf(5) man page for more information. 8.13.4. Configure the activebackup Runner The active backup runner can use all of the link-watchers to determine the status of links in a team. Any one of the following examples can be added to the team JSON format configuration file: This example configuration uses the active-backup runner with ethtool as the link watcher. Port em2 has higher priority. The sticky flag ensures that if em1 becomes active, it stays active as long as the link remains up. This example configuration adds a queue ID of 4 . It uses the active-backup runner with ethtool as the link watcher. Port em2 has higher priority. But the sticky flag ensures that if em1 becomes active, it will stay active as long as the link remains up. To configure the activebackup runner using ethtool as the link watcher and applying a delay, using an editor as root , add the following to the team JSON format configuration file: This example configuration uses the active-backup runner with ethtool as the link watcher. Port em2 has higher priority. But the sticky flag ensures that if em1 becomes active, it stays active while the link remains up. Link changes are not propagated to the runner immediately, but delays are applied. Please see the teamd.conf(5) man page for more information. 8.13.5. Configure the loadbalance Runner This runner can be used for two types of load balancing: active and passive. In active mode, constant re-balancing of traffic is done by using statistics of recent traffic to share out traffic as evenly as possible. In passive mode, streams of traffic are distributed randomly across the available links. This has a speed advantage due to lower processing overhead. In high-volume traffic applications this is often preferred because traffic usually consists of multiple streams, which are distributed randomly between the available links; in this way, load sharing is accomplished without intervention by teamd . To configure the loadbalance runner for passive transmit (Tx) load balancing, using an editor as root , add the following to the team JSON format configuration file: Configuration for hash-based passive transmit (Tx) load balancing. To configure the loadbalance runner for active transmit (Tx) load balancing, using an editor as root , add the following to the team JSON format configuration file: Configuration for active transmit (Tx) load balancing using the basic load balancer. Please see the teamd.conf(5) man page for more information. 8.13.6. 
Configure the LACP (802.3ad) Runner To configure the LACP runner using ethtool as a link watcher, using an editor as root , add the following to the team JSON format configuration file: Configuration for connection to a link aggregation control protocol ( LACP ) capable counterpart. The LACP runner should use ethtool to monitor the status of a link. Note that only ethtool can be used for link monitoring because, for example, in the case of arp_ping , the link would never come up. The reason is that the link has to be established first and only after that can packets, ARP included, go through. Using ethtool prevents this because it monitors each link layer individually. Active load balancing is possible with this runner in the same way as it is done for the loadbalance runner. To enable active transmit (Tx) load balancing, add the following section: Please see the teamd.conf(5) man page for more information. 8.13.7. Configure Monitoring of the Link State The following methods of link state monitoring are available. To implement one of the methods, add the JSON format string to the team JSON format configuration file using an editor running with root privileges. 8.13.7.1. Configure Ethtool for link-state Monitoring To add or edit an existing delay, in milliseconds, between the link coming up and the runner being notified about it, add or edit a section as follows: To add or edit an existing delay, in milliseconds, between the link going down and the runner being notified about it, add or edit a section as follows: 8.13.7.2. Configure ARP Ping for Link-state Monitoring The team daemon teamd sends an ARP REQUEST to an address at the remote end of the link in order to determine if the link is up. The method used is the same as that of the arping utility, but it does not use that utility. Prepare a file containing the new configuration in JSON format similar to the following example: This configuration uses arp_ping as the link watcher. The missed_max option is a limit value of the maximum allowed number of missed replies (ARP replies, for example). It should be chosen in conjunction with the interval option in order to determine the total time before a link is reported as down. To load a new configuration for a team port em2 , from a file containing a JSON configuration, issue the following command as root : Note that the old configuration will be overwritten and that any options omitted will be reset to the default values. See the teamdctl(8) man page for more team daemon control tool command examples. 8.13.7.3. Configure IPv6 NA/NS for Link-state Monitoring To configure the interval between sending NS/NA packets, add or edit a section as follows: The value is a positive number in milliseconds. It should be chosen in conjunction with the missed_max option in order to determine the total time before a link is reported as down. To configure the maximum number of missed NS/NA reply packets to allow before reporting the link as down, add or edit a section as follows: Maximum number of missed NS/NA reply packets. If this number is exceeded, the link is reported as down. The missed_max option is a limit value of the maximum allowed number of missed replies (ARP replies, for example). It should be chosen in conjunction with the interval option in order to determine the total time before a link is reported as down. 
To configure the host name that is resolved to the IPv6 target address for the NS/NA packets, add or edit a section as follows: The " target_host " option contains the host name to be converted to an IPv6 address which will be used as the target address for the NS/NA packets. An IPv6 address can be used in place of a host name. Please see the teamd.conf(5) man page for more information. 8.13.8. Configure Port Selection Override The physical port which transmits a frame is normally selected by the kernel part of the team driver, and is not relevant to the user or system administrator. The output port is selected using the policies of the selected team mode ( teamd runner). On occasion, however, it is helpful to direct certain classes of outgoing traffic to certain physical interfaces to implement slightly more complex policies. By default the team driver is multiqueue aware and 16 queues are created when the driver initializes. If more or fewer queues are required, the Netlink attribute tx_queues can be used to change this value during the team driver instance creation. The queue ID for a port can be set by the port configuration option queue_id as follows: These queue IDs can be used in conjunction with the tc utility to configure a multiqueue queue discipline and filters to bias certain traffic to be transmitted on certain port devices. For example, if using the above configuration and wanting to force all traffic bound for 192.168.1.100 to use enp1s0 in the team as its output device, issue commands as root in the following format: This mechanism of overriding runner selection logic in order to bind traffic to a specific port can be used with all runners. 8.13.9. Configure BPF-based Tx Port Selectors The loadbalance and LACP runners use hashes of packets to sort network traffic flow. The hash computation mechanism is based on the Berkeley Packet Filter ( BPF ) code. The BPF code is used to generate a hash rather than make a policy decision for outgoing packets. The hash length is 8 bits, giving 256 variants. This means many different socket buffers ( SKB ) can have the same hash and therefore pass traffic over the same link. The use of a short hash is a quick way to sort traffic into different streams for the purposes of load balancing across multiple links. In static mode, the hash is only used to decide out of which port the traffic should be sent. In active mode, the runner will continually reassign hashes to different ports in an attempt to reach a perfect balance. The following fragment types or strings can be used for packet Tx hash computation: eth - Uses source and destination MAC addresses. vlan - Uses VLAN ID. ipv4 - Uses source and destination IPv4 addresses. ipv6 - Uses source and destination IPv6 addresses. ip - Uses source and destination IPv4 and IPv6 addresses. l3 - Uses source and destination IPv4 and IPv6 addresses. tcp - Uses source and destination TCP ports. udp - Uses source and destination UDP ports. sctp - Uses source and destination SCTP ports. l4 - Uses source and destination TCP, UDP, and SCTP ports. These strings can be used by adding a line in the following format to the load balance runner: "tx_hash": ["eth", "ipv4", "ipv6"] See Section 8.13.5, "Configure the loadbalance Runner" for an example.
[ "{ \"device\": \"team0\", \"runner\": {\"name\": \"broadcast\"}, \"ports\": {\"em1\": {}, \"em2\": {}} }", "{ \"device\": \"team0\", \"runner\": {\"name\": \"random\"}, \"ports\": {\"em1\": {}, \"em2\": {}} }", "{ \"device\": \"team0\", \"runner\": {\"name\": \"roundrobin\"}, \"ports\": {\"em1\": {}, \"em2\": {}} }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"activebackup\" }, \"link_watch\": { \"name\": \"ethtool\" }, \"ports\": { \"em1\": { \"prio\": -10, \"sticky\": true }, \"em2\": { \"prio\": 100 } } }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"activebackup\" }, \"link_watch\": { \"name\": \"ethtool\" }, \"ports\": { \"em1\": { \"prio\": -10, \"sticky\": true, \"queue_id\": 4 }, \"em2\": { \"prio\": 100 } } }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"activebackup\" }, \"link_watch\": { \"name\": \"ethtool\", \"delay_up\": 2500, \"delay_down\": 1000 }, \"ports\": { \"em1\": { \"prio\": -10, \"sticky\": true }, \"em2\": { \"prio\": 100 } } }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"loadbalance\", \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"] }, \"ports\": {\"em1\": {}, \"em2\": {}} }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"loadbalance\", \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"], \"tx_balancer\": { \"name\": \"basic\" } }, \"ports\": {\"em1\": {}, \"em2\": {}} }", "{ \"device\": \"team0\", \"runner\": { \"name\": \"lacp\", \"active\": true, \"fast_rate\": true, \"tx_hash\": [\"eth\", \"ipv4\", \"ipv6\"] }, \"link_watch\": {\"name\": \"ethtool\"}, \"ports\": {\"em1\": {}, \"em2\": {}} }", "\"tx_balancer\": { \"name\": \"basic\" }", "\"link_watch\": { \"name\": \"ethtool\", \"delay_up\": 2500 }", "\"link_watch\": { \"name\": \"ethtool\", \"delay_down\": 1000 }", "{ \"device\": \"team0\", \"runner\": {\"name\": \"activebackup\"}, \"link_watch\":{ \"name\": \"arp_ping\", \"interval\": 100, \"missed_max\": 30, \"source_host\": \"192.168.23.2\", \"target_host\": \"192.168.23.1\" }, \"ports\": { \"em1\": { \"prio\": -10, \"sticky\": true }, \"em2\": { \"prio\": 100 } } }", "~]# teamdctl port config update em2 JSON-config-file", "{ \"device\": \"team0\", \"runner\": {\"name\": \"activebackup\"}, \"link_watch\": { \"name\": \"nsna_ping\", \"interval\": 200, \"missed_max\": 15, \"target_host\": \"fe80::210:18ff:feaa:bbcc\" }, \"ports\": { \"em1\": { \"prio\": -10, \"sticky\": true }, \"em2\": { \"prio\": 100 } } }", "\"link_watch\": { \"name\": \"nsna_ping\", \"interval\": 200 }", "\"link_watch\": { \"name\": \"nsna_ping\", \"missed_max\": 15 }", "\"link_watch\": { \"name\": \"nsna_ping\", \"target_host\": \"MyStorage\" }", "{ \"queue_id\": 3 }", "~]# tc qdisc add dev team0 handle 1 root multiq ~]# tc filter add dev team0 protocol ip parent 1: prio 1 u32 match ip dst 192.168.1.100 action skbedit queue_mapping 3" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configure_teamd_runners
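The JSON fragments in this section are normally written by hand, but they can also be generated programmatically, which helps when many hosts share the same team layout. The following Python sketch is an editorial illustration, not part of the guide; the device and port names mirror the activebackup example above and are placeholders:

```python
#!/usr/bin/env python3
# Illustrative generator for a teamd activebackup configuration file.
# The result can be loaded with, for example: teamd -f team0.conf -d
import json

config = {
    "device": "team0",
    "runner": {"name": "activebackup"},
    "link_watch": {"name": "ethtool", "delay_up": 2500, "delay_down": 1000},
    "ports": {
        # em2 has the higher priority; em1 is sticky, so once it becomes
        # active it stays active while its link remains up.
        "em1": {"prio": -10, "sticky": True},
        "em2": {"prio": 100},
    },
}

with open("team0.conf", "w") as f:
    json.dump(config, f, indent=4)

print(json.dumps(config, indent=4))
```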
Chapter 4. IdM API example scenarios
Chapter 4. IdM API example scenarios The following examples show common scenarios of using IdM API commands. 4.1. Managing users with IdM API commands The examples below show common scenarios of how you can manage IdM users with the IdM API commands. Examples of managing IdM users with IdM API commands Creating an IdM user In this example, you create an IdM user with the username exampleuser and with one-time password (OTP) authentication as a supported authentication type. Showing IdM user information In this example, you display all available information about the IdM user exampleuser . Modifying an IdM user In this example, you change the e-mail address for the IdM user exampleuser . Searching for an IdM user In this example, you search for all IdM users that match exampleuser in the IdM group admins . Deleting an IdM user In this example, you delete the IdM user exampleuser . To restore the user in the future, use the preserve option. If you use this option, you can restore the user with the user_undel command. Adding and removing a certificate for an IdM user You can add or remove a Base64-encoded certificate for a user with the user_add_cert and user_remove_cert commands. In this example, you add a certificate for the user exampleuser . Enabling and disabling an IdM user You can enable or disable an IdM user with the user_enable and user_disable commands. In this example, you disable the IdM user exampleuser . 4.2. Managing groups with IdM API commands The examples below show common scenarios of how you can manage IdM groups with the IdM API commands. Examples of managing IdM groups with IdM API commands Creating an IdM group In this example, you create an IdM group developers , with a specified Group ID number. Adding a user as a member to an IdM group In this example, you add the admin user to the developers group. Adding a service as a member to an IdM group In this example, you add the HTTP/server.ipa.test service to the developers group. Adding a group as a subgroup to an IdM group In this example, you add another group, admins , to the developers group. Adding IdM group managers In this example, you add the bob user as a group manager for the developers group. Finding an IdM group You can search for an IdM group using various parameters. In this example, you find all groups that the user bob is managing. Displaying IdM group information In this example, you display group information about the developers group, without the members list. Modifying an IdM group In this example, you convert a non-POSIX group testgroup to a POSIX group. Removing members from an IdM group In this example, you remove the admin user from the developers group. Removing IdM group managers In this example, you remove the user bob as a manager from the developers group. Removing an IdM group In this example, you remove the developers group. 4.3. Managing access control with IdM API commands The examples below show common scenarios of how you can manage access control with the IdM API commands. Examples of managing access control with IdM API commands Adding a permission for creating users In this example, you add a permission for creating users. Adding a permission for managing group membership In this example, you add a permission for adding users to groups. Adding a privilege for the user creation process In this example, you add a privilege for creating users, adding them to groups, and managing user certificates. 
Adding a role using a privilege In this example, you add a role using the privilege created in the previous example. Assigning a role to a user In this example, you assign the usermanager role to the user bob . Assigning a role to a group In this example, you assign the usermanager role to the managers group. 4.4. Managing sudo rules with IdM API commands The examples below show common scenarios of how you can manage sudo rules with the IdM API commands. Examples of managing sudo rules with IdM API commands Creating a sudo rule In this example, you create a sudo rule that holds time change commands. Creating a sudo command In this example, you create the date sudo command. Attaching a sudo command to a sudo rule In this example, you attach the date sudo command to the timechange sudo rule. Creating and attaching groups of sudo commands In this example, you create multiple sudo commands, add them to a newly created timecmds sudo command group, and attach the group to the timechange sudo rule. Denying sudo commands In this example, you deny running the rm command with sudo. Adding a user to a sudo rule In this example, you add the user bob to the timechange sudo rule. Making a sudo rule available only for a specified host In this example, you restrict the timechange rule to be available only for the client.ipa.test host. Setting sudo rules to be run as a different user By default, sudo rules are run as root . In this example, you set the timechange sudo rule to be run as the alice user instead. Setting sudo rules to be run as a group In this example, you set the timechange sudo rule to be run as the sysadmins group. Setting a sudo option for a sudo rule In this example, you set a sudo option for the timechange sudo rule. Enabling a sudo rule In this example, you enable the timechange sudo rule. Disabling a sudo rule In this example, you disable the timechange sudo rule. 4.5. Managing Host-based Access Control with IdM API commands The examples below show common scenarios of how you can manage Host-based Access Control (HBAC) with the IdM API commands. Examples of managing HBAC with IdM API commands Creating an HBAC rule In this example, you create a base rule that will handle SSH service access. Adding a user to an HBAC rule In this example, you add the user john to the sshd_rule HBAC rule. Adding a group to an HBAC rule In this example, you add the group developers to the sshd_rule HBAC rule. Removing a user from an HBAC rule In this example, you remove the user john from the sshd_rule HBAC rule. Registering a new target HBAC service You must register a target service before you can attach it to an HBAC rule. In this example, you register the chronyd service. Attaching a registered service to an HBAC rule In this example, you attach the sshd service to the sshd_rule HBAC rule. This service is registered in IPA by default, so there is no need to register it using hbacsvc_add beforehand. Adding a host to an HBAC rule In this example, you add the workstations host group to the sshd_rule HBAC rule. Testing an HBAC rule In this example, you test the sshd_rule HBAC rule against the workstation.ipa.test host, checking whether the user john is allowed to access the sshd service there. Enabling an HBAC rule In this example, you enable the sshd_rule HBAC rule. Disabling an HBAC rule In this example, you disable the sshd_rule HBAC rule.
[ "api.Command.user_add(\"exampleuser\", givenname=\"Example\", sn=\"User\", ipauserauthtype=\"otp\")", "api.Command.user_show(\"exampleuser\", all=True)", "api.Command.user_mod(\"exampleuser\", mail=\"[email protected]\")", "api.Command.user_find(criteria=\"exampleuser\", in_group=\"admins\")", "api.Command.user_del(\"exampleuser\")", "args = [\"exampleuser\"] kw = { \"usercertificate\": \"\"\" MIICYzCCAcygAwIBAgIBADANBgkqhkiG9w0BAQUFADAuMQswCQYDVQQGEwJVUzEMMAoGA1UEC hMDSUJNMREwDwYDVQQLEwhMb2NhbCBDQTAeFw05OTEyMjIwNTAwMDBaFw0wMDEyMjMwNDU5NT laMC4xCzAJBgNVBAYTAlVTMQwwCgYDVQQKEwNJQk0xETAPBgNVBAsTCExvY2FsIENBMIGfMA0 GCSqGSIb3DQEBATOPA4GNADCBiQKBgQD2bZEo7xGaX2/0GHkrNFZvlxBou9v1Jmt/PDiTMPve 8r9FeJAQ0QdvFST/0JPQYD20rH0bimdDLgNdNynmyRoS2S/IInfpmf69iyc2G0TPyRvmHIiOZ bdCd+YBHQi1adkj17NDcWj6S14tVurFX73zx0sNoMS79q3tuXKrDsxeuwIDAQABo4GQMIGNME sGCVUdDwGG+EIBDQQ+EzxHZW5lcmF0ZWQgYnkgdGhlIFNlY3VyZVdheSBTZWN1cml0eSBTZXJ 2ZXIgZm9yIE9TLzM5MCAoUkFDRikwDgYDVR0PAQH/BAQDAgAGMA8GA1UdEwEB/wQFMAMBAf8w HQYDVR0OBBYEFJ3+ocRyCTJw067dLSwr/nalx6YMMA0GCSqGSIb3DQEBBQUAA4GBAMaQzt+za j1GU77yzlr8iiMBXgdQrwsZZWJo5exnAucJAEYQZmOfyLiMD6oYq+ZnfvM0n8G/Y79q8nhwvu xpYOnRSAXFp6xSkrIOeZtJMY1h00LKp/JX3Ng1svZ2agE126JHsQ0bhzN5TKsYfbwfTwfjdWA Gy6Vf1nYi/rO+ryMO \"\"\" } api.Command.user_add_cert(*args, **kw)", "api.Command.user_disable(\"exampleuser\")", "api.Command.group_add(\"developers\", gidnumber=500, description=\"Developers\")", "api.Command.group_add_member(\"developers\", user=\"admin\")", "api.Command.group_add_member(\"developers\", service=\"HTTP/server.ipa.test\")", "api.Command.group_add_member(\"developers\", group=\"admins\")", "api.Command.group_add_member_manager(\"developers\", user=\"bob\")", "api.Command.group_find(membermanager_user=\"bob\")", "api.Command.group_show(\"developers\", no_members=True)", "api.Command.group_mod(\"testgroup\", posix=True)", "api.Command.group_remove_member(\"developers\", user=\"admin\")", "api.Command.group_remove_member_manager(\"developers\", user=\"bob\")", "api.Command.group_del(\"developers\")", "api.Command.permission_add(\"Create users\", ipapermright='add', type='user')", "api.Command.permission_add(\"Manage group membership\", ipapermright='write', type='group', attrs=\"member\")", "api.Command.permission_add(\"Create users\", ipapermright='add', type='user') api.Command.permission_add(\"Manage group membership\", ipapermright='write', type='group', attrs=\"member\") api.Command.permission_add(\"Manage User certificates\", ipapermright='write', type='user', attrs='usercertificate') api.Command.privilege_add(\"User creation\") api.Command.privilege_add_permission(\"User creation\", permission=\"Create users\") api.Command.privilege_add_permission(\"User creation\", permission=\"Manage group membership\") api.Command.privilege_add_permission(\"User creation\", permission=\"Manage User certificates\")", "api.Command.role_add(\"usermanager\", description=\"Users manager\") api.Command.role_add_privilege(\"usermanager\", privilege=\"User creation\")", "api.Command.role_add_member(\"usermanager\", user=\"bob\")", "api.Command.role_add_member(\"usermanager\", group=\"managers\")", "api.Command.sudorule_add(\"timechange\")", "api.Command.sudocmd_add(\"/usr/bin/date\")", "api.Command.sudorule_add_allow_command(\"timechange\", sudocmd=\"/usr/bin/date\")", "api.Command.sudocmd_add(\"/usr/bin/date\") api.Command.sudocmd_add(\"/usr/bin/timedatectl\") api.Command.sudocmd_add(\"/usr/sbin/hwclock\") api.Command.sudocmdgroup_add(\"timecmds\") api.Command.sudocmdgroup_add_member(\"timecmds\", 
sudocmd=\"/usr/bin/date\") api.Command.sudocmdgroup_add_member(\"timecmds\", sudocmd=\"/usr/bin/timedatectl\") api.Command.sudocmdgroup_add_member(\"timecmds\", sudocmd=\"/usr/sbin/hwclock\") api.Command.sudorule_add_allow_command(\"timechange\", sudocmdgroup=\"timecmds\")", "api.Command.sudocmd_add(\"/usr/bin/rm\") api.Command.sudorule_add_deny_command(\"timechange\", sudocmd=\"/usr/bin/rm\")", "api.Command.sudorule_add_user(\"timechange\", user=\"bob\")", "api.Command.sudorule_add_host(\"timechange\", host=\"client.ipa.test\")", "api.Command.sudorule_add_runasuser(\"timechange\", user=\"alice\")", "api.Command.sudorule_add_runasgroup(\"timechange\", group=\"sysadmins\")", "api.Command.sudorule_add_option(\"timechange\", ipasudoopt=\"logfile='/var/log/timechange_log'\")", "api.Command.sudorule_enable(\"timechange\")", "api.Command.sudorule_disable(\"timechange\")", "api.Command.hbacrule_add(\"sshd_rule\")", "api.Command.hbacrule_add_user(\"sshd_rule\", user=\"john\")", "api.Command.hbacrule_add_user(\"sshd_rule\", group=\"developers\")", "api.Command.hbacrule_remove_user(\"sshd_rule\", user=\"john\")", "api.Command.hbacsvc_add(\"chronyd\")", "api.Command.hbacrule_add_service(\"sshd_rule\", hbacsvc=\"sshd\")", "api.Command.hbacrule_add_host(\"sshd_rule\", hostgroup=\"workstations\")", "api.Command.hbactest(user=\"john\", targethost=\"workstation.ipa.test\", service=\"sshd\", rules=\"sshd_rule\")", "api.Command.hbacrule_enable(\"sshd_rule\")", "api.Command.hbacrule_disable(\"sshd_rule\")" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_api/idm-api-example-scenarios_using-idm-api
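The api.Command calls in these examples assume an IdM API object that has already been initialized. On an IdM server, a typical interactive session (for example in ipython after kinit admin) starts roughly as follows; this initialization sketch follows the usual ipalib pattern and is provided here as a reminder rather than as part of the example set:

```python
# Illustrative setup before running the api.Command examples above.
from ipalib import api

api.bootstrap(context="server")      # use context="cli" on an enrolled client
api.finalize()

if api.env.in_server:
    api.Backend.ldap2.connect()      # direct LDAP backend on the server
else:
    api.Backend.rpcclient.connect()  # JSON-RPC backend on a client

# After this, the examples can be run as shown, for instance:
result = api.Command.user_show("admin", all=True)
print(result["result"]["uid"])
```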
13.4. Shareable Disks in Red Hat Virtualization
13.4. Shareable Disks in Red Hat Virtualization Some applications require storage to be shared between servers. Red Hat Virtualization allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way, a single virtual disk can be used by multiple cluster-aware guests. Shared disks are not appropriate for every situation. For applications such as clustered database servers and other highly available services, shared disks are appropriate. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption because their reads and writes to the disk are not coordinated. You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable. You can mark a disk as shareable either when you create it or by editing the disk later.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/shareable_disks
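Marking a disk as shareable is normally done in the Administration Portal, but the same flag can be set programmatically. The sketch below is an editorial illustration using the RHV/oVirt Python SDK (ovirt-engine-sdk-python); the connection details and disk ID are placeholders, and the exact service and type names should be verified against the SDK version in use:

```python
# Illustrative sketch: mark an existing virtual disk as shareable via the SDK.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine URL
    username="admin@internal",
    password="redacted",
    ca_file="ca.pem",
)

# "<disk-id>" is a placeholder for the UUID of the disk to update.
disk_service = connection.system_service().disks_service().disk_service("<disk-id>")
disk_service.update(types.Disk(shareable=True))  # shareable disks cannot have snapshots

connection.close()
```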
function::sock_state_num2str
function::sock_state_num2str Name function::sock_state_num2str - Given a socket state number, return a string representation Synopsis Arguments state The state number
[ "sock_state_num2str:string(state:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sock-state-num2str
Managing cloud resources with the Dashboard
Managing cloud resources with the Dashboard Red Hat OpenStack Services on OpenShift 18.0 Viewing and configuring the Dashboard service (horizon) GUI OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_cloud_resources_with_the_dashboard/index
Chapter 2. Acknowledgments
Chapter 2. Acknowledgments Red Hat Ceph Storage version 6.1 contains many contributions from the Red Hat Ceph Storage team. In addition, the Ceph project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and additionally, but not limited to, the contributions from organizations such as: Intel®, Fujitsu®, UnitedStack, Yahoo™, Ubuntu Kylin, Mellanox®, CERN™, Deutsche Telekom, Mirantis®, SanDisk™, and SUSE®
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/6.1_release_notes/acknowledgments
1.10. Starting a Kickstart Installation
1.10. Starting a Kickstart Installation To begin a kickstart installation, you must boot the system from boot media you have made or the Red Hat Enterprise Linux CD-ROM #1, and enter a special boot command at the boot prompt. The installation program looks for a kickstart file if the ks command line argument is passed to the kernel. CD-ROM #1 and Diskette The linux ks=floppy command also works if the ks.cfg file is located on a vfat or ext2 file system on a diskette and you boot from the Red Hat Enterprise Linux CD-ROM #1. An alternate boot command is to boot off the Red Hat Enterprise Linux CD-ROM #1 and have the kickstart file on a vfat or ext2 file system on a diskette. To do so, enter the following command at the boot: prompt: With Driver Disk If you need to use a driver disk with kickstart, specify the dd option as well. For example, to boot off a boot diskette and use a driver disk, enter the following command at the boot: prompt: Boot CD-ROM If the kickstart file is on a boot CD-ROM as described in Section 1.8.1, "Creating Kickstart Boot Media" , insert the CD-ROM into the system, boot the system, and enter the following command at the boot: prompt (where ks.cfg is the name of the kickstart file): Other options to start a kickstart installation are as follows: ks=nfs: <server> :/ <path> The installation program looks for the kickstart file on the NFS server <server> , as file <path> . The installation program uses DHCP to configure the Ethernet card. For example, if your NFS server is server.example.com and the kickstart file is in the NFS share /mydir/ks.cfg , the correct boot command would be ks=nfs:server.example.com:/mydir/ks.cfg . ks=http:// <server> / <path> The installation program looks for the kickstart file on the HTTP server <server> , as file <path> . The installation program uses DHCP to configure the Ethernet card. For example, if your HTTP server is server.example.com and the kickstart file is in the HTTP directory /mydir/ks.cfg , the correct boot command would be ks=http://server.example.com/mydir/ks.cfg . ks=floppy The installation program looks for the file ks.cfg on a vfat or ext2 file system on the diskette in /dev/fd0 . ks=floppy:/ <path> The installation program looks for the kickstart file on the diskette in /dev/fd0 , as file <path> . ks=hd: <device> :/ <file> The installation program mounts the file system on <device> (which must be vfat or ext2), and looks for the kickstart configuration file as <file> in that file system (for example, ks=hd:sda3:/mydir/ks.cfg ). ks=file:/ <file> The installation program tries to read the file <file> from the file system; no mounts are done. This is normally used if the kickstart file is already on the initrd image. ks=cdrom:/ <path> The installation program looks for the kickstart file on CD-ROM, as file <path> . ks If ks is used alone, the installation program configures the Ethernet card to use DHCP. The kickstart file is read from the "bootServer" specified in the DHCP response, which is treated as an NFS server sharing the kickstart file. By default, the bootServer is the same as the DHCP server. The name of the kickstart file is one of the following: If DHCP is specified and the boot file begins with a / , the boot file provided by DHCP is looked for on the NFS server. If DHCP is specified and the boot file begins with something other than a / , the boot file provided by DHCP is looked for in the /kickstart directory on the NFS server. 
If DHCP did not specify a boot file, then the installation program tries to read the file /kickstart/1.2.3.4-kickstart , where 1.2.3.4 is the numeric IP address of the machine being installed. ksdevice= <device> The installation program uses this network device to connect to the network. For example, to start a kickstart installation with the kickstart file on an NFS server that is connected to the system through the eth1 device, use the command ks=nfs: <server> :/ <path> ksdevice=eth1 at the boot: prompt.
[ "linux ks=hd:fd0:/ks.cfg", "linux ks=floppy dd", "linux ks=cdrom:/ks.cfg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-starting_a_kickstart_installation
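For the ks=http:// method, any web server that can serve the ks.cfg file works. As a quick editorial illustration (not part of the original guide), the following Python snippet serves the current directory over HTTP so a client can boot with ks=http://<server>:8080/ks.cfg; the port is an arbitrary choice, and the installer must be able to reach it:

```python
#!/usr/bin/env python3
# Throwaway HTTP server for a kickstart file; run it in the directory that
# contains ks.cfg, then boot the client with ks=http://<this-host>:8080/ks.cfg
import http.server
import socketserver

PORT = 8080  # example port; make sure the machine being installed can reach it

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print("Serving kickstart files on port %d" % PORT)
    httpd.serve_forever()
```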
Chapter 343. Tar File DataFormat
Chapter 343. Tar File DataFormat Available as of Camel version 2.16 The Tar File Data Format is a message compression and de-compression format. Messages can be marshalled (compressed) to Tar Files containing a single entry, and Tar Files containing a single entry can be unmarshalled (decompressed) to the original file contents. There is also an aggregation strategy that can aggregate multiple messages into a single Tar File. 343.1. TarFile Options The Tar File dataformat supports 4 options, which are listed below. Name Default Java Type Description usingIterator false Boolean If the tar file has more than one entry, setting this option to true allows you to work with the splitter EIP, to split the data using an iterator in a streaming mode. allowEmptyDirectory false Boolean If the tar file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty preservePathElements false Boolean If the file name contains path elements, setting this option to true allows the path to be maintained in the tar file. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 343.2. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.dataformat.tarfile.allow-empty-directory If the tar file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty false Boolean camel.dataformat.tarfile.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.tarfile.enabled Enable tarfile dataformat true Boolean camel.dataformat.tarfile.preserve-path-elements If the file name contains path elements, setting this option to true allows the path to be maintained in the tar file. false Boolean camel.dataformat.tarfile.using-iterator If the tar file has more than one entry, setting this option to true allows you to work with the splitter EIP, to split the data using an iterator in a streaming mode. false Boolean 343.3. Marshal In this example we marshal a regular text/XML payload to a compressed payload using Tar File compression, and send it to an ActiveMQ queue called MY_QUEUE. from("direct:start").marshal().tarFile().to("activemq:queue:MY_QUEUE"); The name of the Tar entry inside the created Tar File is based on the incoming CamelFileName message header, which is the standard message header used by the file component. Additionally, the outgoing CamelFileName message header is automatically set to the value of the incoming CamelFileName message header, with the ".tar" suffix. 
So for example, if the following route finds a file named "test.txt" in the input directory, the output will be a Tar File named "test.txt.tar" containing a single Tar entry named "test.txt": from("file:input/directory?antInclude=*/.txt").marshal().tarFile().to("file:output/directory"); If there is no incoming CamelFileName message header (for example, if the file component is not the consumer), then the message ID is used by default, and since the message ID is normally a unique generated ID, you will end up with filenames like ID-MACHINENAME-2443-1211718892437-1-0.tar . If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("direct:start").setHeader(Exchange.FILE_NAME, constant("report.txt")).marshal().tarFile().to("file:output/directory"); This route would result in a Tar File named "report.txt.tar" in the output directory, containing a single Tar entry named "report.txt". 343.4. Unmarshal In this example we unmarshal a Tar File payload from an ActiveMQ queue called MY_QUEUE to its original format, and forward it for processing to the UnTarpedMessageProcessor . from("activemq:queue:MY_QUEUE").unmarshal().tarFile().process(new UnTarpedMessageProcessor()); If the Tar File has more than one entry, set the usingIterator option of TarFileDataFormat to true, and you can use the splitter to do the further work. TarFileDataFormat tarFile = new TarFileDataFormat(); tarFile.setUsingIterator(true); from("file:src/test/resources/org/apache/camel/dataformat/tarfile/?consumer.delay=1000&noop=true") .unmarshal(tarFile) .split(body(Iterator.class)) .streaming() .process(new UnTarpedMessageProcessor()) .end(); Or you can use the TarSplitter as an expression for the splitter directly, like this: from("file:src/test/resources/org/apache/camel/dataformat/tarfile?consumer.delay=1000&noop=true") .split(new TarSplitter()) .streaming() .process(new UnTarpedMessageProcessor()) .end(); 343.5. Aggregate Note: this aggregation strategy requires an eager completion check to work properly. In this example we aggregate all text files found in the input directory into a single Tar File that is stored in the output directory. from("file:input/directory?antInclude=*/.txt") .aggregate(new TarAggregationStrategy()) .constant(true) .completionFromBatchConsumer() .eagerCheckCompletion() .to("file:output/directory"); The outgoing CamelFileName message header is created using java.io.File.createTempFile, with the ".tar" suffix. If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("file:input/directory?antInclude=*/.txt") .aggregate(new TarAggregationStrategy()) .constant(true) .completionFromBatchConsumer() .eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant("reports.tar")) .to("file:output/directory"); 343.6. Dependencies To use Tar Files in your Camel routes you need to add a dependency on camel-tarfile which implements this data format. If you use Maven you can just add the following to your pom.xml , substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-tarfile</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>
[ "from(\"direct:start\").marshal().tarFile().to(\"activemq:queue:MY_QUEUE\");", "from(\"file:input/directory?antInclude=*/.txt\").marshal().tarFile().to(\"file:output/directory\");", "from(\"direct:start\").setHeader(Exchange.FILE_NAME, constant(\"report.txt\")).marshal().tarFile().to(\"file:output/directory\");", "from(\"activemq:queue:MY_QUEUE\").unmarshal().tarFile().process(new UnTarpedMessageProcessor());", "TarFileDataFormat tarFile = new TarFileDataFormat(); tarFile.setUsingIterator(true); from(\"file:src/test/resources/org/apache/camel/dataformat/tarfile/?consumer.delay=1000&noop=true\") .unmarshal(tarFile) .split(body(Iterator.class)) .streaming() .process(new UnTarpedMessageProcessor()) .end();", "from(\"file:src/test/resources/org/apache/camel/dataformat/tarfile?consumer.delay=1000&noop=true\") .split(new TarSplitter()) .streaming() .process(new UnTarpedMessageProcessor()) .end();", "from(\"file:input/directory?antInclude=*/.txt\") .aggregate(new TarAggregationStrategy()) .constant(true) .completionFromBatchConsumer() .eagerCheckCompletion() .to(\"file:output/directory\");", "from(\"file:input/directory?antInclude=*/.txt\") .aggregate(new TarAggregationStrategy()) .constant(true) .completionFromBatchConsumer() .eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant(\"reports.tar\")) .to(\"file:output/directory\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-tarfile</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/tarfile-dataformat
Chapter 4. Protect a web application by using OpenID Connect (OIDC) authorization code flow
Chapter 4. Protect a web application by using OpenID Connect (OIDC) authorization code flow Discover how to secure application HTTP endpoints by using the Quarkus OpenID Connect (OIDC) authorization code flow mechanism with the Quarkus OIDC extension, providing robust authentication and authorization. For more information, see OIDC code flow mechanism for protecting web applications . To learn about how well-known social providers such as Apple, Facebook, GitHub, Google, Mastodon, Microsoft, Twitch, Twitter (X), and Spotify can be used with Quarkus OIDC, see Configuring well-known OpenID Connect providers . See also, Authentication mechanisms in Quarkus . If you want to protect your service applications by using OIDC Bearer token authentication, see OIDC Bearer token authentication . 4.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.9.6 A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) 4.2. Architecture In this example, we build a simple web application with a single page: /index.html This page is protected, and only authenticated users can access it. 4.3. Solution Follow the instructions in the sections and create the application step by step. Alternatively, you can go right to the completed example. Clone the Git repository by running the git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.8 command. Alternatively, download an archive . The solution is located in the security-openid-connect-web-authentication-quickstart directory . 4.4. Create the Maven project First, we need a new project. Create a new project by running the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-web-authentication-quickstart \ --extension='resteasy-reactive,oidc' \ --no-code cd security-openid-connect-web-authentication-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-web-authentication-quickstart \ -Dextensions='resteasy-reactive,oidc' \ -DnoCode cd security-openid-connect-web-authentication-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-web-authentication-quickstart" If you already have your Quarkus project configured, you can add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This adds the following dependency to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 4.5. 
Write the application Let's write a simple Jakarta REST resource that has all the tokens returned in the authorization code grant response injected: package org.acme.security.openid.connect.web.authentication; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.RefreshToken; @Path("/tokens") public class TokenResource { /** * Injection point for the ID token issued by the OpenID Connect provider */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the access token issued by the OpenID Connect provider */ @Inject JsonWebToken accessToken; /** * Injection point for the refresh token issued by the OpenID Connect provider */ @Inject RefreshToken refreshToken; /** * Returns the tokens available to the application. * This endpoint exists only for demonstration purposes. * Do not expose these tokens in a real application. * * @return an HTML page containing the tokens available to the application. */ @GET @Produces("text/html") public String getTokens() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>") .append("<ul>"); Object userName = this.idToken.getClaim(Claims.preferred_username); if (userName != null) { response.append("<li>username: ").append(userName.toString()).append("</li>"); } Object scopes = this.accessToken.getClaim("scope"); if (scopes != null) { response.append("<li>scopes: ").append(scopes.toString()).append("</li>"); } response.append("<li>refresh_token: ").append(refreshToken.getToken() != null).append("</li>"); return response.append("</ul>").append("</body>").append("</html>").toString(); } } This endpoint has ID, access, and refresh tokens injected. It returns a preferred_username claim from the ID token, a scope claim from the access token, and a refresh token availability status. You only need to inject the tokens if the endpoint needs to use the ID token to interact with the currently authenticated user or use the access token to access a downstream service on behalf of this user. For more information, see the Access ID and Access Tokens section of the reference guide. 4.6. Configure the application The OIDC extension allows you to define the configuration by using the application.properties file in the src/main/resources directory. quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated This is the simplest configuration you can have when enabling authentication to your application. The quarkus.oidc.client-id property references the client_id issued by the OIDC provider, and the quarkus.oidc.credentials.secret property sets the client secret. The quarkus.oidc.application-type property is set to web-app to tell Quarkus that you want to enable the OIDC authorization code flow so that your users are redirected to the OIDC provider to authenticate. Finally, the quarkus.http.auth.permission.authenticated permission is set to tell Quarkus about the paths you want to protect. In this case, all paths are protected by a policy that ensures only authenticated users can access them. For more information, see Security Authorization Guide . 4.7. 
Start and configure the Keycloak server To start a Keycloak server, use Docker and run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev where keycloak.version is set to 24.0.0 or later. You can access your Keycloak Server at localhost:8180 . To access the Keycloak Administration Console, log in as the admin user. The username and password are both admin . To create a new realm, import the realm configuration file . For more information, see the Keycloak documentation about how to create and configure a new realm . 4.8. Run the application in dev and JVM modes To run the application in dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then, run it: java -jar target/quarkus-app/quarkus-run.jar 4.9. Run the application in Native mode This same demo can be compiled into native code. No modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary and optimized to run with minimal resources. Compilation takes longer, so this step is turned off by default. You can build again by enabling the native build: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.package.type=native After a while, you can run this binary directly: ./target/security-openid-connect-web-authentication-quickstart-runner 4.10. Test the application To test the application, open your browser and access the following URL: http://localhost:8080/tokens If everything works as expected, you are redirected to the Keycloak server to authenticate. To authenticate to the application, enter the following credentials at the Keycloak login page: Username: alice Password: alice After clicking the Login button, you are redirected back to the application, and a session cookie will be created. The session for this demo is valid for a short period of time and, on every page refresh, you will be asked to re-authenticate. For information about how to increase the session timeouts, see the Keycloak session timeout documentation. For example, you can access the Keycloak Admin console directly from the dev UI by clicking the Keycloak Admin link if you use Dev Services for Keycloak in dev mode: For more information about writing the integration tests that depend on Dev Services for Keycloak , see the Dev Services for Keycloak section. 4.11. Summary You have learned how to set up and use the OIDC authorization code flow mechanism to protect and test application HTTP endpoints. After you have completed this tutorial, explore OIDC Bearer token authentication and other authentication mechanisms . 4.12. 
References Quarkus Security overview OIDC code flow mechanism for protecting web applications Configuring well-known OpenID Connect providers OpenID Connect and OAuth2 Client and Filters reference guide Dev Services for Keycloak Sign and encrypt JWT tokens with SmallRye JWT Build Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Keycloak Documentation Protect Quarkus web application by using Auth0 OpenID Connect provider OpenID Connect JSON Web Token
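The manual check in the Test the application section can also be automated. The following is only an illustrative sketch, not part of the quickstart: it assumes the quarkus-junit5 and rest-assured test dependencies are on the classpath and that an OIDC provider is reachable when the test runs (for example, through Dev Services for Keycloak), and it asserts nothing more than that an unauthenticated request to /tokens is redirected to the provider instead of being served.

package org.acme.security.openid.connect.web.authentication;

import static io.restassured.RestAssured.given;

import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class TokenResourceTest {

    @Test
    public void unauthenticatedRequestIsRedirectedToTheProvider() {
        // With quarkus.oidc.application-type=web-app, a request without a
        // session cookie is answered with a redirect to the provider's
        // authorization endpoint instead of the protected content.
        given()
            .redirects().follow(false)
            .when().get("/tokens")
            .then()
            .statusCode(302);
    }
}

A full end-to-end test that follows the redirect and submits the Keycloak login form is possible but out of scope for this sketch.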
[ "quarkus create app org.acme:security-openid-connect-web-authentication-quickstart --extension='resteasy-reactive,oidc' --no-code cd security-openid-connect-web-authentication-quickstart", "mvn io.quarkus.platform:quarkus-maven-plugin:3.8.5:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-web-authentication-quickstart -Dextensions='resteasy-reactive,oidc' -DnoCode cd security-openid-connect-web-authentication-quickstart", "quarkus extension add oidc", "./mvnw quarkus:add-extension -Dextensions='oidc'", "./gradlew addExtension --extensions='oidc'", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>", "implementation(\"io.quarkus:quarkus-oidc\")", "package org.acme.security.openid.connect.web.authentication; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.RefreshToken; @Path(\"/tokens\") public class TokenResource { /** * Injection point for the ID token issued by the OpenID Connect provider */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the access token issued by the OpenID Connect provider */ @Inject JsonWebToken accessToken; /** * Injection point for the refresh token issued by the OpenID Connect provider */ @Inject RefreshToken refreshToken; /** * Returns the tokens available to the application. * This endpoint exists only for demonstration purposes. * Do not expose these tokens in a real application. * * @return an HTML page containing the tokens available to the application. */ @GET @Produces(\"text/html\") public String getTokens() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\") .append(\"<ul>\"); Object userName = this.idToken.getClaim(Claims.preferred_username); if (userName != null) { response.append(\"<li>username: \").append(userName.toString()).append(\"</li>\"); } Object scopes = this.accessToken.getClaim(\"scope\"); if (scopes != null) { response.append(\"<li>scopes: \").append(scopes.toString()).append(\"</li>\"); } response.append(\"<li>refresh_token: \").append(refreshToken.getToken() != null).append(\"</li>\"); return response.append(\"</ul>\").append(\"</body>\").append(\"</html>\").toString(); } }", "quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated", "docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev", "quarkus dev", "./mvnw quarkus:dev", "./gradlew --console=plain quarkusDev", "quarkus build", "./mvnw install", "./gradlew build", "java -jar target/quarkus-app/quarkus-run.jar", "quarkus build --native", "./mvnw install -Dnative", "./gradlew build -Dquarkus.package.type=native", "./target/security-openid-connect-web-authentication-quickstart-runner" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/security-oidc-code-flow-authentication-tutorial
Chapter 2. Installing a cluster on IBM Power
Chapter 2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.12, you can install a cluster on IBM Power infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.12 on the following IBM hardware: IBM Power9 or Power10 processor-based systems Note Support for RHCOS functionality for all IBM POWER8 models, IBM POWER9 AC922, IBM POWER9 IC922, and IBM POWER9 LC922 is deprecated. These hardware models remain fully supported in OpenShift Container Platform 4.12. However, Red Hat recommends that you use later hardware models. Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. 
Recommended IBM Power system requirements Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. 
Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.7. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
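Because the reverse records and, where possible, DHCP are what give the nodes their hostnames, it can help to see how a DHCP server ties the two together. The fragment below is only an illustrative sketch in ISC dhcpd syntax: it reuses the ocp4.example.com names and 192.168.1.0/24 addresses from the example zone files in the next section, and the MAC addresses are hypothetical placeholders for the real NIC addresses of your nodes.

subnet 192.168.1.0 netmask 255.255.255.0 {
  option domain-name-servers 192.168.1.5;   # the example nameserver, ns1.example.com

  # One host stanza per cluster node: the MAC address of the node's NIC
  # (hypothetical values below) is pinned to a fixed IP address and a
  # persistent hostname.
  host control-plane0 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.1.97;
    option host-name "control-plane0.ocp4.example.com";
  }
  host compute0 {
    hardware ethernet 52:54:00:aa:bb:02;
    fixed-address 192.168.1.11;
    option host-name "compute0.ocp4.example.com";
  }
}

Equivalent stanzas would be added for the bootstrap machine and the remaining control plane and compute nodes.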
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 
7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. 
Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup.
All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation.
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 2.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 2.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported.
If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 2.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 2.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 2.11. 
Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines.
This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 2.9.2. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. 
This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. 
Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.13. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.14. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.15. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. 
If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 2.16. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.17. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In this version of OpenShift Container Platform, egress IP is assigned only to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in this version of OpenShift Container Platform.
For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 2.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. 
If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. 
Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. 
The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple network interfaces to a single interface Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. 
You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes.
Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform 4.9 or later, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. 
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. 
You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. 
Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. 
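For reference only, the same storage change can also be applied non-interactively with an oc patch command that mirrors the emptyDir patch shown later in this section. This is an illustrative sketch rather than part of the documented procedure: the claim name image-registry-storage matches the automatically created PVC mentioned in the verification steps below, and you can substitute any existing claim in the openshift-image-registry namespace. USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":"image-registry-storage"}}}}' After the storage is configured, the Image Registry Operator can deploy the registry against the referenced claim.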
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run oc edit configs.imageregistry/cluster . Then, change the managementState: Removed line to managementState: Managed . 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 
worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m 
operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_power/installing-ibm-power
OpenShift sandboxed containers
OpenShift sandboxed containers OpenShift Container Platform 4.14 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/openshift_sandboxed_containers/index
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or on a trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Power Virtual Server The OpenShift Container Platform cluster uses several IBM Cloud(R) and IBM Power(R) Virtual Server components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see the IBM Cloud(R) documentation for Quotas and service limits . Virtual Private Cloud Each OpenShift Container Platform cluster creates its own Virtual Private Cloud (VPC). The default quota of VPCs per region is 10. If you have 10 VPCs created, you will need to increase your quota before attempting an installation. Application load balancer By default, each cluster creates two application load balancers (ALBs): Internal load balancer for the control plane API server External load balancer for the control plane API server You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs is 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Power(R) Virtual Server. Transit Gateways Each OpenShift Container Platform cluster creates its own Transit Gateway to enable communication with a VPC. The default quota of transit gateways per account is 10. If you have 10 transit gateways created, you will need to increase your quota before attempting an installation. Dynamic Host Configuration Protocol Service There is a limit of one Dynamic Host Configuration Protocol (DHCP) service per IBM Power(R) Virtual Server instance. Networking Due to networking limitations, there is a restriction of one OpenShift cluster installed through IPI per zone per account. This is not configurable. Virtual Server Instances By default, a cluster creates server instances with the following resources : 0.5 CPUs 32 GB RAM System Type: s922 Processor Type: uncapped , shared Storage Tier: Tier-3 The following nodes are created: One bootstrap machine, which is removed after the installation is complete Three control plane nodes Three compute nodes For more information, see Creating a Power Systems Virtual Server in the IBM Cloud(R) documentation. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud(R) Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.4. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain.
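After you complete the name server configuration described in the following procedure, you can optionally confirm that the zone has been delegated to CIS. The following is an illustrative sketch only, using the dig utility that this document also uses for DNS validation; the subdomain shown is the example value used later in this section, and your actual domain and the CIS name servers returned will differ: USD dig +short NS clusters.openshiftcorp.com If the delegation is in place, the answer lists the CIS name servers for your instance rather than your registrar's defaults.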
Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Log in to IBM Cloud(R) by using the CLI: USD ibmcloud login Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard-next 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_CRN> 1 1 The instance CRN (Cloud Resource Name). For example: ibmcloud cis instance-set crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460d8c2e4bbfbd893c2c:c09233ac-48a5-4ccb-a051-d1cfb3fc7eb5:: Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.5. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.5.1. Pre-requisite permissions Table 2.1. Pre-requisite permissions Role Access Viewer, Operator, Editor, Administrator, Reader, Writer, Manager Internet Services service in <resource_group> resource group Viewer, Operator, Editor, Administrator, User API key creator, Service ID creator IAM Identity Service service Viewer, Operator, Administrator, Editor, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service in <resource_group> resource group Viewer Resource Group: Access to view the resource group itself. The resource type should equal Resource group , with a value of <your_resource_group_name>. 2.5.2. Cluster-creation permissions Table 2.2. Cluster-creation permissions Role Access Viewer <resource_group> (Resource Group Created for Your Team) Viewer, Operator, Editor, Reader, Writer, Manager All Identity and IAM enabled services in Default resource group Viewer, Reader Internet Services service Viewer, Operator, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Editor Cloud Object Storage service Viewer Default resource group: The resource type should equal Resource group , with a value of Default . If your account administrator changed your account's default resource group to something other than Default, use that value instead.
Viewer, Operator, Editor, Reader, Manager Workspace for IBM Power(R) Virtual Server service in <resource_group> resource group Viewer, Operator, Editor, Reader, Writer, Manager, Administrator Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability Viewer, Operator, Editor Transit Gateway service Viewer, Operator, Editor, Administrator, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service <resource_group> resource group 2.5.3. Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.5.4. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . A minimal CLI sketch for creating a user API key is shown at the end of this chapter. 2.6. Supported IBM Power Virtual Server regions and zones You can deploy an OpenShift Container Platform cluster to the following regions: dal (Dallas, USA) dal10 dal12 eu-de (Frankfurt, Germany) eu-de-1 eu-de-2 lon (London, UK) lon04 mad (Madrid, Spain) mad02 mad04 osa (Osaka, Japan) osa21 sao (Sao Paulo, Brazil) sao01 sao04 syd (Sydney, Australia) syd04 wdc (Washington DC, USA) wdc06 wdc07 You might optionally specify the IBM Cloud(R) region in which the installer will create any VPC components. Supported regions in IBM Cloud(R) are: us-south eu-de eu-es eu-gb jp-osa au-syd br-sao ca-tor jp-tok 2.7. Next steps Creating an IBM Power(R) Virtual Server workspace
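As referenced in the "Creating an API key" section above, the following is a minimal CLI sketch for creating a user API key with the IBM Cloud(R) CLI. The key name, description, and output file are illustrative assumptions; if you assigned your access policies to a service ID, create a service ID API key instead: USD ibmcloud iam api-key-create <key_name> -d "OpenShift Container Platform installer key" --file <key_name>.json Save the generated file securely, because the API key value cannot be retrieved again after it is created.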
[ "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/installing-ibm-cloud-account-power-vs
Chapter 31. Viewing Rule Name column in guided decision tables
Chapter 31. Viewing Rule Name column in guided decision tables You can view the Rule Name column in the guided decision table if needed. Procedure In the guided decision tables designer, click Columns . Select the Show rule name column check box. Click Finish to save. The default rule name format is Row (row_number)(table_name) . The Source contains the default value if you do not specify a rule name. In the guided decision table, you can add a rule name in the Rule Name column and override the default value.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-decision-tables-rulename-column-view-proc
Chapter 2. OAuthAccessToken [oauth.openshift.io/v1]
Chapter 2. OAuthAccessToken [oauth.openshift.io/v1] Description OAuthAccessToken describes an OAuth access token. The name of a token must be prefixed with a sha256~ string, must not contain "/" or "%" characters and must be at least 32 characters long. The name of the token is constructed from the actual token by sha256-hashing it and using URL-safe unpadded base64-encoding (as described in RFC4648) on the hashed result. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources authorizeToken string AuthorizeToken contains the token that authorized this token clientName string ClientName references the client that created this token. expiresIn integer ExpiresIn is the seconds from CreationTime before this token expires. inactivityTimeoutSeconds integer InactivityTimeoutSeconds is the value in seconds, from the CreationTimestamp, after which this token can no longer be used. The value is automatically incremented when the token is used. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta redirectURI string RedirectURI is the redirection associated with the token. refreshToken string RefreshToken is the value by which this token can be renewed. Can be blank. scopes array (string) Scopes is an array of the requested scopes. userName string UserName is the user name associated with this token userUID string UserUID is the unique UID associated with this token 2.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthaccesstokens DELETE : delete collection of OAuthAccessToken GET : list or watch objects of kind OAuthAccessToken POST : create an OAuthAccessToken /apis/oauth.openshift.io/v1/watch/oauthaccesstokens GET : watch individual changes to a list of OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthaccesstokens/{name} DELETE : delete an OAuthAccessToken GET : read the specified OAuthAccessToken PATCH : partially update the specified OAuthAccessToken PUT : replace the specified OAuthAccessToken /apis/oauth.openshift.io/v1/watch/oauthaccesstokens/{name} GET : watch changes to an object of kind OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/oauth.openshift.io/v1/oauthaccesstokens Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuthAccessToken Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthAccessToken Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessTokenList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthAccessToken Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body OAuthAccessToken schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 202 - Accepted OAuthAccessToken schema 401 - Unauthorized Empty 2.2.2. /apis/oauth.openshift.io/v1/watch/oauthaccesstokens Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/oauth.openshift.io/v1/oauthaccesstokens/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the OAuthAccessToken Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthAccessToken Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 202 - Accepted OAuthAccessToken schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthAccessToken Table 2.17. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthAccessToken Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthAccessToken Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body OAuthAccessToken schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK OAuthAccessToken schema 201 - Created OAuthAccessToken schema 401 - Unauthorized Empty 2.2.4. /apis/oauth.openshift.io/v1/watch/oauthaccesstokens/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the OAuthAccessToken Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthAccessToken. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
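As an illustration of the naming rule described at the start of this chapter, the following shell sketch derives the object name for a given token value and reads the stored object with oc. The <raw_token> placeholder and the assumption that the leading sha256~ prefix is stripped before hashing are illustrative and are not taken from this reference.
# Sketch only: derive the OAuthAccessToken object name from a raw token value.
# <raw_token> is a placeholder; stripping the sha256~ prefix first is an assumption.
raw_token="sha256~<raw_token>"
secret="${raw_token#sha256~}"
name="sha256~$(printf '%s' "$secret" | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=')"
oc get oauthaccesstokens "$name"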
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/oauth_apis/oauthaccesstoken-oauth-openshift-io-v1
Chapter 15. Red Hat build of Kogito microservice deployment troubleshooting
Chapter 15. Red Hat build of Kogito microservice deployment troubleshooting Use the information in this section to troubleshoot issues that you might encounter when using the operator to deploy Red Hat build of Kogito microservices. The following information is updated as new issues and workarounds are discovered. No builds are running If no builds are running and no resources have been created in the relevant namespace, enter the following commands to retrieve the running pods and to view the operator log for the pod: View RHPAM Kogito Operator log for a specified pod Verify KogitoRuntime status Suppose you create, for example, a KogitoRuntime application that references a nonexistent image by using the following YAML definition: Example YAML definition for a KogitoRuntime application apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example # Application name spec: image: 'not-existing-image:latest' replicas: 1 You can verify the status of the KogitoRuntime application by running the oc describe KogitoRuntime example command in a bash console, which returns output similar to the following: Example KogitoRuntime status At the end of the output, you can see the KogitoRuntime status with a relevant message.
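If you only need the failure message rather than the full describe output, a JSONPath query can narrow the result. The following is a minimal sketch that assumes the example application name and the Failed condition type shown in the status above.
# Sketch: print only the message of the Failed condition for the example application
oc get KogitoRuntime example -o jsonpath='{.status.conditions[?(@.type=="Failed")].message}'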
[ "// Retrieves running pods oc get pods NAME READY STATUS RESTARTS AGE kogito-operator-6d7b6d4466-9ng8t 1/1 Running 0 26m // Opens RHPAM Kogito Operator log for the pod oc logs -f kogito-operator-6d7b6d4466-9ng8t", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example # Application name spec: image: 'not-existing-image:latest' replicas: 1", "[user@localhost ~]USD oc describe KogitoRuntime example Name: example Namespace: username-test Labels: <none> Annotations: <none> API Version: rhpam.kiegroup.org/v1 Kind: KogitoRuntime Metadata: Creation Timestamp: 2021-05-20T07:19:41Z Generation: 1 Managed Fields: API Version: rhpam.kiegroup.org/v1 Fields Type: FieldsV1 fieldsV1: f:spec: .: f:image: f:replicas: Manager: Mozilla Operation: Update Time: 2021-05-20T07:19:41Z API Version: rhpam.kiegroup.org/v1 Fields Type: FieldsV1 fieldsV1: f:spec: f:monitoring: f:probes: .: f:livenessProbe: f:readinessProbe: f:resources: f:runtime: f:status: .: f:cloudEvents: f:conditions: Manager: main Operation: Update Time: 2021-05-20T07:19:45Z Resource Version: 272185 Self Link: /apis/rhpam.kiegroup.org/v1/namespaces/ksuta-test/kogitoruntimes/example UID: edbe0bf1-554e-4523-9421-d074070df982 Spec: Image: not-existing-image:latest Replicas: 1 Status: Cloud Events: Conditions: Last Transition Time: 2021-05-20T07:19:44Z Message: Reason: NoPodAvailable Status: False Type: Deployed Last Transition Time: 2021-05-20T07:19:44Z Message: Reason: RequestedReplicasNotEqualToAvailableReplicas Status: True Type: Provisioning Last Transition Time: 2021-05-20T07:19:45Z Message: you may not have access to the container image \"quay.io/kiegroup/not-existing-image:latest\" Reason: ImageStreamNotReadyReason Status: True Type: Failed" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/ref-kogito-microservice-deploy-troubleshooting_deploying-kogito-microservices-on-openshift
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories have been issued for the bug fixes and CVE fixes included in this release: RHSA-2023:0194 RHSA-2023:0195 RHSA-2023:0196 RHSA-2023:0197 RHSA-2023:0198 RHSA-2023:0199 RHSA-2023:0200 RHSA-2023:0201 RHSA-2023:0202 Revised on 2024-05-09 16:47:09 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.18/rn-openjdk11018-advisory_openjdk
Chapter 4. Install Pacemaker
Chapter 4. Install Pacemaker Refer to the Configuring and managing high availability clusters documentation to set up a Pacemaker cluster. The following is a sample procedure for installing a Pacemaker cluster. It is recommended to work with a Red Hat consultant to install and configure Pacemaker in your environment. 4.1. Install Pacemaker rpms # yum -y install pcs pacemaker # passwd hacluster [provide a password] # systemctl enable --now pcsd.service 4.2. Create a cluster Create a cluster named s4ha , consisting of s4node1 and s4node2 , and start the cluster. Note that at this point, the cluster is not yet configured to start automatically after a reboot. # pcs cluster auth s4node1 s4node2 # pcs cluster setup --name s4ha s4node1 s4node2 # pcs cluster start --all 4.2.1. Define general cluster properties Set the resource stickiness: [root]# pcs resource defaults resource-stickiness=1 [root]# pcs resource defaults migration-threshold=3 4.3. Configure STONITH The fencing mechanism STONITH depends on the underlying platform. See the document Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH to configure STONITH. After configuring STONITH, test it from s4node1 by fencing s4node2 , and vice versa: [root@s4node1]# pcs stonith fence s4node2 Verify that s4node2 is fenced properly. After fencing, start the cluster on s4node2 by using the following command, because the cluster has not yet been enabled to auto-start. Auto-start is enabled after initial testing. [root@s4node2 ~]# pcs cluster start 4.4. Install resource-agents-sap on all cluster nodes [root]# yum install resource-agents-sap 4.5. Configure cluster resources for shared filesystems Configure a shared filesystem to provide the following mount points on all the cluster nodes. /sapmnt /usr/sap/trans /usr/sap/S4H/SYS 4.5.1. Configure shared filesystems managed by the cluster The cloned Filesystem cluster resource can be used to mount the shares from an external NFS server on all cluster nodes, as shown below. [root]# pcs resource create s4h_fs_sapmnt Filesystem \ device='<NFS_Server>:<sapmnt_nfs_share>' directory='/sapmnt' \ fstype='nfs' --clone interleave=true [root]# pcs resource create s4h_fs_sap_trans Filesystem \ device='<NFS_Server>:<sap_trans_nfs_share>' directory='/usr/sap/trans' \ fstype='nfs' --clone interleave=true [root]# pcs resource create s4h_fs_sap_sys Filesystem \ device='<NFS_Server>:<s4h_sys_nfs_share>' directory='/usr/sap/S4H/SYS' \ fstype='nfs' --clone interleave=true After creating the Filesystem resources, verify that they have started properly on all nodes. [root]# pcs status ... Clone Set: s4h_fs_sapmnt-clone [s4h_fs_sapmnt] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sap_trans-clone [s4h_fs_sap_trans] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sys-clone [s4h_fs_sys] Started: [ s4node1 s4node2 ] ... 4.6. Configure ASCS resource group 4.6.1. Create resource for virtual IP address [root]# pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.201 \ --group s4h_ASCS20_group 4.6.2. Create resource for ASCS filesystem.
The following is an example of creating the resource for an NFS filesystem: [root]# pcs resource create s4h_fs_ascs20 Filesystem \ device='<NFS_Server>:<s4h_ascs20_nfs_share>' \ directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe \ --group s4h_ASCS20_group op start interval=0 timeout=60 \ op stop interval=0 timeout=120 \ op monitor interval=200 timeout=40 The following is an example of creating the resources for an HA-LVM filesystem: [root]# pcs resource create s4h_fs_ascs20_lvm LVM \ volgrpname='<ascs_volume_group>' exclusive=true \ --group s4h_ASCS20_group [root]# pcs resource create s4h_fs_ascs20 Filesystem \ device='/dev/mapper/<ascs_logical_volume>' \ directory=/usr/sap/S4H/ASCS20 fstype=ext4 \ --group s4h_ASCS20_group 4.6.3. Create resource for ASCS instance [root]# pcs resource create s4h_ascs20 SAPInstance \ InstanceName="S4H_ASCS20_s4ascs" \ START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 \ --group s4h_ASCS20_group \ op monitor interval=20 on-fail=restart timeout=60 \ op start interval=0 timeout=600 \ op stop interval=0 timeout=600 Note: meta resource-stickiness=5000 balances out the failover constraint with ERS so that the resource stays on the node where it started and does not migrate around the cluster uncontrollably. Add a resource stickiness to the group to ensure that the ASCS will stay on a node if possible: [root]# pcs resource meta s4h_ASCS20_group resource-stickiness=3000 4.7. Configure ERS resource group 4.7.1. Create resource for virtual IP address [root]# pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.202 \ --group s4h_ERS29_group 4.7.2. Create resource for ERS filesystem The following is an example of creating the resource for an NFS filesystem: [root]# pcs resource create s4h_fs_ers29 Filesystem \ device='<NFS_Server>:<s4h_ers29_nfs_share>' \ directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe \ --group s4h_ERS29_group op start interval=0 timeout=60 \ op stop interval=0 timeout=120 op monitor interval=200 timeout=40 The following is an example of creating the resources for an HA-LVM filesystem: [root]# pcs resource create s4h_fs_ers29_lvm LVM \ volgrpname='<ers_volume_group>' exclusive=true --group s4h_ERS29_group [root]# pcs resource create s4h_fs_ers29 Filesystem \ device='/dev/mapper/<ers_logical_volume>' directory=/usr/sap/S4H/ERS29 \ fstype=ext4 --group s4h_ERS29_group 4.7.3. Create resource for ERS instance Create an ERS instance cluster resource. Note: In ENSA2 deployments, the IS_ERS attribute is optional. For more information about IS_ERS , see How does the IS_ERS attribute work on a SAP NetWeaver cluster with Standalone Enqueue Server (ENSA1 and ENSA2)? [root]# pcs resource create s4h_ers29 SAPInstance \ InstanceName="S4H_ERS29_s4ers" \ START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_s4ers \ AUTOMATIC_RECOVER=false \ --group s4h_ERS29_group \ op monitor interval=20 on-fail=restart timeout=60 \ op start interval=0 timeout=600 \ op stop interval=0 timeout=600 4.8. Create constraints 4.8.1. Create colocation constraint for ASCS and ERS resource groups The resource groups s4h_ASCS20_group and s4h_ERS29_group should try to avoid running on the same node. The order of the groups matters. [root]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group \ -5000 4.8.2.
Create order constraint for ASCS and ERS resource groups Prefer to start the s4h_ASCS20_group before the s4h_ERS29_group : [root]# pcs constraint order start s4h_ASCS20_group then start \ s4h_ERS29_group symmetrical=false kind=Optional [root]# pcs constraint order start s4h_ASCS20_group then stop \ s4h_ERS29_group symmetrical=false kind=Optional 4.8.3. Create order constraint for /sapmnt resource managed by cluster If the shared filesystem /sapmnt is managed by the cluster, then the following constraints ensure that the resource groups with the ASCS and ERS SAPInstance resources are started only after the filesystem is available. [root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group [root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group
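After the fencing tests and initial resource verification have succeeded, the cluster is typically enabled to start automatically on boot, as noted earlier in this procedure. The following commands are a minimal sketch of that follow-up step and of reviewing the resulting configuration; they are shown for illustration and are not part of the reference procedure above.
# Sketch: enable cluster auto-start on all nodes once initial testing has passed,
# then review the configured constraints and the overall cluster state
[root]# pcs cluster enable --all
[root]# pcs constraint
[root]# pcs status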
[ "yum -y install pcs pacemaker passwd hacluster systemctl enable --now pcsd.service", "pcs cluster auth s4node1 s4node2 pcs cluster setup --name s4ha s4node1 s4node2 pcs cluster start --all", "pcs resource defaults resource-stickiness=1 pcs resource defaults migration-threshold=3", "pcs stonith fence s4node2", "pcs cluster start", "yum install resource-agents-sap", "pcs resource create s4h_fs_sapmnt Filesystem device='<NFS_Server>:<sapmnt_nfs_share>' directory='/sapmnt' fstype='nfs' --clone interleave=true pcs resource create s4h_fs_sap_trans Filesystem device='<NFS_Server>:<sap_trans_nfs_share>' directory='/usr/sap/trans' fstype='nfs' --clone interleave=true pcs resource create s4h_fs_sap_sys Filesystem device='<NFS_Server>:<s4h_sys_nfs_share>' directory='/usr/sap/S4H/SYS' fstype='nfs' --clone interleave=true", "pcs status Clone Set: s4h_fs_sapmnt-clone [s4h_fs_sapmnt] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sap_trans-clone [s4h_fs_sap_trans] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sys-clone [s4h_fs_sys] Started: [ s4node1 s4node2 ]", "pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.201 --group s4h_ASCS20_group", "pcs resource create s4h_fs_ascs20 Filesystem device='<NFS_Server>:<s4h_ascs20_nfs_share>' directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe --group s4h_ASCS20_group op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40", "pcs resource create s4h_fs_ascs20_lvm LVM volgrpname='<ascs_volume_group>' exclusive=true --group s4h_ASCS20_group pcs resource create s4h_fs_ascs20 Filesystem device='/dev/mapper/<ascs_logical_volume>' directory=/usr/sap/S4H/ASCS20 fstype=ext4 --group s4h_ASCS20_group", "pcs resource create s4h_ascs20 SAPInstance InstanceName=\"S4H_ASCS20_s4ascs\" START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs AUTOMATIC_RECOVER=false meta resource-stickiness=5000 --group s4h_ASCS20_group op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600", "pcs resource meta s4h_ASCS20_group resource-stickiness=3000", "pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.202 --group s4h_ERS29_group", "pcs resource create s4h_fs_ers29 Filesystem device='<NFS_Server>:<s4h_ers29_nfs_share>' directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe --group s4h_ERS29_group op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40", "pcs resource create s4h_fs_ers29_lvm LVM volgrpname='<ers_volume_group>' exclusive=true --group s4h_ERS29_group pcs resource create s4h_fs_ers29 Filesystem device='/dev/mapper/<ers_logical_volume>' directory=/usr/sap/S4H/ERS29 fstype=ext4 --group s4h_ERS29_group", "pcs resource create s4h_ers29 SAPInstance InstanceName=\"S4H_ERS29_s4ers\" START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_s4ers AUTOMATIC_RECOVER=false --group s4h_ERS29_group op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600", "pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000", "pcs constraint order start s4h_ASCS20_group then start s4h_ERS29_group symmetrical=false kind=Optional pcs constraint order start s4h_ASCS20_group then stop s4h_ERS29_group symmetrical=false kind=Optional", "pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_install_pacemaker_configuring-cost-optimized-sap-v9
Image APIs
Image APIs OpenShift Container Platform 4.17 Reference guide for image APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/image_apis/index
probe::ioblock.end
probe::ioblock.end Name probe::ioblock.end - Fires whenever a block I/O transfer is complete. Synopsis ioblock.end Values name name of the probe point sector beginning sector for the entire bio hw_segments number of segments after physical and DMA remapping hardware coalescing is performed phys_segments number of segments in this bio after physical address coalescing is performed. flags see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported devname block device name bytes_done number of bytes transferred error 0 on success size total size in bytes idx offset into the bio vector array vcnt bio vector count, which represents the number of array elements (page, offset, length) that make up this I/O request ino i-node number of the mapped file rw binary trace for read/write request Context The process signals the transfer is done.
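As a minimal usage sketch, the probe can be exercised directly from the command line, assuming the systemtap package and matching kernel debuginfo are installed; the output format chosen below is illustrative and uses only the values documented above.
# Sketch: report each completed block I/O transfer using values provided by this probe
stap -e 'probe ioblock.end { printf("%s: sector=%d bytes_done=%d error=%d\n", devname, sector, bytes_done, error) }'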
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioblock-end
Edge computing
Edge computing OpenShift Container Platform 4.16 Configure and deploy OpenShift Container Platform clusters at the network edge Red Hat OpenShift Documentation Team
[ "export ISO_IMAGE_NAME=<iso_image_name> 1", "export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1", "export OCP_VERSION=<ocp_version> 1", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}", "sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}", "wget http://USD(hostname)/USD{ISO_IMAGE_NAME}", "Saving to: rhcos-4.16.1-x86_64-live.x86_64.iso rhcos-4.16.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s", "oc edit AgentServiceConfig", "- cpuArchitecture: x86_64 openshiftVersion: \"4.16\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso", "apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/example-repository\" 4 mirror-by-digest-only = true [[registry.mirror]] location = \"mirror1.registry.corp.com:5000/example-repository\" 5", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4", "oc edit AgentServiceConfig agent", "apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com", "oc debug node/<node_name>", "sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>", "Login Succeeded!", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "oc -n openshift-gitops get applications.argoproj.io clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq", "[ \"CreateNamespace=true\", \"PrunePropagationPolicy=background\", \"RespectIgnoreDifferences=true\" ]", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16", "mkdir -p ./out", "podman run --log-driver=none --rm 
registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out", "example/ ├── acmpolicygenerator │ ├── kustomization.yaml │ └── source-crs/ ├── policygentemplates 1 │ ├── kustomization.yaml │ └── source-crs/ └── siteconfig ├── extra-manifests └── kustomization.yaml", "example/ ├── acmpolicygenerator │ ├── acm-common-ranGen.yaml │ ├── acm-example-sno-site.yaml │ ├── acm-group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ ├── source-crs/ │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── extra-manifests/ 1 ├── custom-manifests/ 2 ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "├── acmpolicygenerator │ ├── kustomization.yaml 1 │ ├── version_4.13 2 │ │ ├── common-ranGen.yaml │ │ ├── group-du-sno-ranGen.yaml │ │ ├── group-du-sno-validator-ranGen.yaml │ │ ├── helix56-v413.yaml │ │ ├── kustomization.yaml 3 │ │ ├── ns.yaml │ │ └── source-crs/ 4 │ │ └── reference-crs/ 5 │ │ └── custom-crs/ 6 │ └── version_4.14 7 │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── helix56-v414.yaml │ ├── kustomization.yaml 8 │ ├── ns.yaml │ └── source-crs/ 9 │ └── reference-crs/ 10 │ └── custom-crs/ 11 └── siteconfig ├── kustomization.yaml ├── version_4.13 │ ├── helix56-v413.yaml │ ├── kustomization.yaml │ ├── extra-manifest/ 12 │ └── custom-manifest/ 13 └── version_4.14 ├── helix57-v414.yaml ├── kustomization.yaml ├── extra-manifest/ 14 └── custom-manifest/ 15", "extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2", "resources: - version_4.13 1 #- version_4.14 2", "mkdir -p ./update", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./update", "oc get managedcluster -l 'local-cluster!=true'", "oc label managedcluster -l 'local-cluster!=true' ztp-done=", "oc delete -f update/argocd/deployment/clusters-app.yaml", "oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge", "oc delete -f update/argocd/deployment/policies-app.yaml", "├── acmpolicygenerator │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - acm-common-ranGen.yaml - acm-group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - acm-group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml", "{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }", "oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json", "oc patch 
multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json", "oc apply -k out/argocd/deployment", "grep -r \"ztp-deploy-wave\" out/source-crs", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"", "~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml", "clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "export CLUSTERNS=example-sno", "oc create namespace USDCLUSTERNS", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "apiVersion: ran.openshift.io/v2 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: # clusterLabels: common: true group-du-sno: \"\" sites : \"example-sno\" accelerated-ztp: full", "interfaces: - name: hosta_conn type: ipsec libreswan: left: <cluster_node> 1 leftid: '%fromcert' leftmodecfgclient: false leftcert: <left_cert> 2 leftrsasigkey: '%cert' right: <external_host> 3 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> 4 ikev2: insist 5 type: tunnel", "out └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-endpoint-config.bu 1 ├── 99-ipsec-master-endpoint-config.yaml ├── 99-ipsec-worker-endpoint-config.bu ├── 99-ipsec-worker-endpoint-config.yaml ├── build.sh ├── ca.pem 2 ├── left_server.p12 ├── enable-ipsec.yaml ├── ipsec-endpoint-config.yml └── README.md", "siteconfig ├── site1-sno-du.yaml ├── extra-manifest/ └── custom-manifest ├── enable-ipsec.yaml ├── 99-ipsec-worker-endpoint-config.yaml └── 99-ipsec-master-endpoint-config.yaml", "clusters: - clusterName: \"site1-sno-du\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ - custom-manifest/", "oc debug node/<node_name>", "sh-5.1# ip xfrm policy", "src 
172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel", "sh-5.1# ip xfrm state", "src 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000", "sh-5.1# ping 172.16.110.8", "sh-5.1# ping 172.16.110.8 PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data. 64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms 64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms", "export CLUSTER=<clusterName>", "oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq", "curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'", "oc get AgentClusterInstall -n <cluster_name>", "oc get managedcluster", "oc get applications.argoproj.io -n openshift-gitops clusters -o yaml", "syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. 
Make sure the \"SiteConfig\" CRD is installed on the destination cluster", "siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory", "Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'", "kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background", "oc delete policy -n <namespace> <policy_name>", "oc delete -k out/argocd/deployment", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out", "out └── argocd └── example ├── acmpolicygenerator │ ├── {policy-prefix}common-ranGen.yaml │ ├── {policy-prefix}example-sno-site.yaml │ ├── {policy-prefix}group-du-sno-ranGen.yaml │ ├── {policy-prefix}group-du-sno-validator-ranGen.yaml │ ├── │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml", "mkdir -p ./site-install", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
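# Note: the commented-out extraManifestPath below points to a Git directory of additional
# install-time manifests; extraManifests.searchPaths provides an equivalent way to reference
# one or more such directories.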
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator install site-1-sno.yaml /output", "site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml", "mkdir -p ./site-machineconfig", "podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator install -E site-1-sno.yaml /output", "site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml", "mkdir -p ./ref", "podman run -it --rm -v `pwd`/out/argocd/example/acmpolicygenerator:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 generator config -N . 
/output", "ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs", "oc describe node example-node.example.com", "Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos", "apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson", "apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <cluster_name> namespace: <cluster_name> spec: kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 clusterRef: name: <cluster_name> namespace: <cluster_name> pullSecretRef: name: pull-secret", "ssh -i /path/to/privatekey core@<host_name>", "cat /proc/cmdline", "apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.16.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64 2", "oc apply -f clusterImageSet-4.16.yaml", "apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2", "oc apply -f cluster-namespace.yaml", "oc apply -R ./site-install/site-sno-1", "oc get managedcluster", "oc get agent -n <cluster_name>", "oc describe agent -n <cluster_name>", "oc get agentclusterinstall -n <cluster_name>", "oc describe agentclusterinstall -n <cluster_name>", "oc get managedclusteraddon -n <cluster_name>", "oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig", "oc get managedcluster", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h", "oc get clusterdeployment -n <cluster_name>", "NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h", "oc describe agentclusterinstall -n <cluster_name> <cluster_name>", "oc delete managedcluster <cluster_name>", "oc delete namespace <cluster_name>", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" cpuPartitioningMode: AllNodes 1", "oc debug node/example-sno-1", "sh-4.4# pgrep ovn | while read i; do taskset -cp USDi; done", "pid 8481's current affinity list: 0-1,52-53 pid 8726's current affinity list: 0-1,52-53 pid 9088's current affinity list: 0-1,52-53 pid 9945's current affinity list: 0-1,52-53 pid 10387's current affinity list: 0-1,52-53 pid 12123's current affinity list: 0-1,52-53 pid 13313's current affinity list: 0-1,52-53", "sh-4.4# pgrep systemd | while read i; do taskset -cp USDi; done", "pid 1's current affinity list: 0-1,52-53 pid 938's current affinity list: 0-1,52-53 pid 962's current affinity list: 0-1,52-53 pid 1197's current affinity list: 
0-1,52-53", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] 
Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc
2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. 
Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-kdump-config-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "Automatically generated by extra-manifests-builder Do not make changes directly. 
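# Note: the MachineConfig that follows enables the kdump.service unit on control-plane (master)
# nodes and reserves memory for the crash kernel with the crashkernel=512M kernel argument,
# so a vmcore can be captured if the node panics.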
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-kdump-config-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump-remove-ice-module.service contents: | [Unit] Description=Remove ice module when doing kdump Before=kdump.service [Service] Type=oneshot RemainAfterExit=true ExecStart=/usr/local/bin/kdump-remove-ice-module.sh [Install] WantedBy=multi-user.target storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvdXNyL2Jpbi9lbnYgYmFzaAoKIyBUaGlzIHNjcmlwdCByZW1vdmVzIHRoZSBpY2UgbW9kdWxlIGZyb20ga2R1bXAgdG8gcHJldmVudCBrZHVtcCBmYWlsdXJlcyBvbiBjZXJ0YWluIHNlcnZlcnMuCiMgVGhpcyBpcyBhIHRlbXBvcmFyeSB3b3JrYXJvdW5kIGZvciBSSEVMUExBTi0xMzgyMzYgYW5kIGNhbiBiZSByZW1vdmVkIHdoZW4gdGhhdCBpc3N1ZSBpcwojIGZpeGVkLgoKc2V0IC14CgpTRUQ9Ii91c3IvYmluL3NlZCIKR1JFUD0iL3Vzci9iaW4vZ3JlcCIKCiMgb3ZlcnJpZGUgZm9yIHRlc3RpbmcgcHVycG9zZXMKS0RVTVBfQ09ORj0iJHsxOi0vZXRjL3N5c2NvbmZpZy9rZHVtcH0iClJFTU9WRV9JQ0VfU1RSPSJtb2R1bGVfYmxhY2tsaXN0PWljZSIKCiMgZXhpdCBpZiBmaWxlIGRvZXNuJ3QgZXhpc3QKWyAhIC1mICR7S0RVTVBfQ09ORn0gXSAmJiBleGl0IDAKCiMgZXhpdCBpZiBmaWxlIGFscmVhZHkgdXBkYXRlZAoke0dSRVB9IC1GcSAke1JFTU9WRV9JQ0VfU1RSfSAke0tEVU1QX0NPTkZ9ICYmIGV4aXQgMAoKIyBUYXJnZXQgbGluZSBsb29rcyBzb21ldGhpbmcgbGlrZSB0aGlzOgojIEtEVU1QX0NPTU1BTkRMSU5FX0FQUEVORD0iaXJxcG9sbCBucl9jcHVzPTEgLi4uIGhlc3RfZGlzYWJsZSIKIyBVc2Ugc2VkIHRvIG1hdGNoIGV2ZXJ5dGhpbmcgYmV0d2VlbiB0aGUgcXVvdGVzIGFuZCBhcHBlbmQgdGhlIFJFTU9WRV9JQ0VfU1RSIHRvIGl0CiR7U0VEfSAtaSAncy9eS0RVTVBfQ09NTUFORExJTkVfQVBQRU5EPSJbXiJdKi8mICcke1JFTU9WRV9JQ0VfU1RSfScvJyAke0tEVU1QX0NPTkZ9IHx8IGV4aXQgMAo= mode: 448 path: /usr/local/bin/kdump-remove-ice-module.sh", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. 
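# Note: the MachineConfig that follows writes /etc/crio/crio.conf.d/99-crio-disable-wipe.toml
# on worker nodes; the base64 payload decodes to '[crio]' with 'clean_shutdown_file = ""',
# which disables the clean-shutdown check so CRI-O does not wipe container storage after an
# unexpected reboot.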
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" 
name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: type: \"vector\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines #apiVersion: \"logging.openshift.io/v1\" #kind: ClusterLogForwarder #metadata: name: instance namespace: openshift-logging #spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit - infrastructure labels: label1: test1 label2: test2 label3: test3 label4: test4 name: all-to-default outputRefs: - kafka-open", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "Automatically generated by extra-manifests-builder Do not make changes directly. 
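# Note: the MachineConfig that follows adds a one-shot sync-time-once.service unit on master
# nodes; it runs only when chronyd.service is disabled and calls 'chronyd -q' once after the
# network is online to step the system clock at boot.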
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" 
priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - 
\"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator 
annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false logLevel: 0", "containers: - name: my-sriov-workload-container resources: limits: openshift.io/<resource_name>: \"1\" requests: openshift.io/<resource_name>: \"1\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName", "Automatically generated by extra-manifests-builder Do not make changes directly. 
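# Note: the MachineConfig that follows appends the intel_iommu=on and iommu=pt kernel arguments
# on master nodes, which are needed for SR-IOV virtual function passthrough with the vfio-pci
# driver.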
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "installConfigOverrides: \"{\\\"capabilities\\\":{\\\"baselineCapabilitySet\\\": \\\"None\\\" }}\"", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager data: pprof-config.yaml: | disabled: True", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node except the installation disk. storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" #", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: cpu: isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true hardwareTuning: isolatedCpuFreq: 2500000 reservedCpuFreq: 2800000", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name, for example: # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from the [sysctl] section data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable", "OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}')", "DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64)", "podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc debug node/<node_name>", "sh-4.4# uname -r", "4.18.0-305.49.1.rt7.121.el8_4.x86_64", "oc get operatorhub cluster -o yaml", "spec: disableAllDefaultSources: true", "oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'", "certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 
redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}", "oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'", "default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management", "oc get -n openshift-logging ClusterLogForwarder instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open", "oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed", "oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"", "Removed", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# systemctl status chronyd", "● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)", "PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2", "oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'", "sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00", "oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container", "phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533", "oc get sriovoperatorconfig -n 
openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"", "true", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"", "Succeeded", "oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml", "apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7", "oc get PerformanceProfile openshift-node-performance-profile -o yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile", "oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"", "Available -- True Upgradeable -- True Progressing -- False Degraded -- False", "oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: 
performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch", "oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'", "true", "oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION", "Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"", "oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"", "grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h", "oc get route -n openshift-monitoring alertmanager-main", "oc get route -n openshift-monitoring grafana", "oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"", "0-3", "siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml ├── extra-manifest/ └── custom-manifest └── 01-example-machine-config.yaml", "clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml", "- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude", "clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml", "siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: nodes: - hostname: node6 role: \"worker\" crAnnotations: add: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: true", "get bmh -n <managed-cluster-namespace> <bmh-object> -ojsonpath='{.metadata}' | jq -r '.annotations[\"bmac.agent-install.openshift.io/remove-agent-and-node-on-delete\"]'", "true", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"cnfdf20\" namespace: \"cnfdf20\" spec: clusters: - nodes: - hostName: node6 role: \"worker\" crSuppression: - BareMetalHost", "oc get bmh -n <cluster-ns>", "oc get agent -n <cluster-ns>", "oc get nodes", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: common-latest placementBindingDefaults: name: common-latest-placement-binding 1 policyDefaults: namespace: ztp-common placement: labelSelector: matchExpressions: - key: common operator: In values: - \"true\" - key: du-profile operator: In values: - latest remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: common-latest-config-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/ReduceMonitoringFootprint.yaml - path: source-crs/DefaultCatsrc.yaml 2 patches: - metadata: name: redhat-operators-disconnected spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - path: 
source-crs/DisconnectedICSP.yaml patches: - spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io - name: common-latest-subscriptions-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: 3 - path: source-crs/SriovSubscriptionNS.yaml - path: source-crs/SriovSubscriptionOperGroup.yaml - path: source-crs/SriovSubscription.yaml - path: source-crs/SriovOperatorStatus.yaml - path: source-crs/PtpSubscriptionNS.yaml - path: source-crs/PtpSubscriptionOperGroup.yaml - path: source-crs/PtpSubscription.yaml - path: source-crs/PtpOperatorStatus.yaml - path: source-crs/ClusterLogNS.yaml - path: source-crs/ClusterLogOperGroup.yaml - path: source-crs/ClusterLogSubscription.yaml - path: source-crs/ClusterLogOperatorStatus.yaml - path: source-crs/StorageNS.yaml - path: source-crs/StorageOperGroup.yaml - path: source-crs/StorageSubscription.yaml - path: source-crs/StorageOperatorStatus.yaml", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno placementBindingDefaults: name: group-du-sno-placement-binding policyDefaults: namespace: ztp-group placement: labelSelector: matchExpressions: - key: group-du-sno operator: Exists remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-config-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: '10' manifests: - path: source-crs/PtpConfigSlave-MCP-master.yaml patches: - metadata: null name: du-ptp-slave namespace: openshift-ptp annotations: ran.openshift.io/ztp-deploy-wave: '10' spec: profile: - name: slave interface: USDinterface ptp4lOpts: '-2 -s' phc2sysOpts: '-a -r -n 24' ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: 'true' ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 
boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: slave priority: 4 match: - nodeLabel: node-role.kubernetes.io/master", "--- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: du-upgrade placementBindingDefaults: name: du-upgrade-placement-binding policyDefaults: namespace: ztp-group-du-sno placement: labelSelector: matchExpressions: - key: group-du-sno operator: Exists remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: du-upgrade-operator-catsrc-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/DefaultCatsrc.yaml patches: - metadata: name: redhat-operators spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators:v4.14 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq", "{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }", "oc get policies -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m", "export NS=<namespace>", "oc get policy -n USDNS", "oc describe -n openshift-gitops application policies", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error", "oc get policy -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d", "oc get Placement -n USDNS", "oc get Placement -n USDNS <placement_rule_name> -o yaml", "oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq", "oc get policy -n USDCLUSTER", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'", "oc delete clustergroupupgrades 
-n ztp-install USDCLUSTER", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true", "policyDefaults: complianceType: \"mustonlyhave\" policies: - name: config-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"\" manifests: - path: source-crs/SriovOperatorConfig.yaml", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:", "oc create -f cgu-remove.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get <kind> <changed_cr_name>", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h", "oc get <kind> <changed_cr_name>", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16.1 extract /home/ztp --tar | tar x -C ./out", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true", "- path: source-crs/PerformanceProfile.yaml patches: - spec: # These must be tailored for the specific hardware platform cpu: isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"", "example └── acmpolicygenerator ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-dev placementBindingDefaults: name: group-dev-placement-binding policyDefaults: namespace: ztp-clusters placement: labelSelector: matchExpressions: - key: dev operator: In values: - \"true\" remediationAction: 
inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-dev-group-dev-cluster-log-ns policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/ClusterLogNS.yaml - name: group-dev-group-dev-cluster-log-operator-group policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/ClusterLogOperGroup.yaml - name: group-dev-group-dev-cluster-log-sub policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/ClusterLogSubscription.yaml - name: group-dev-group-dev-lso-ns policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/StorageNS.yaml - name: group-dev-group-dev-lso-operator-group policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/StorageOperGroup.yaml - name: group-dev-group-dev-lso-sub policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/StorageSubscription.yaml - name: group-dev-group-dev-pao-cat-source policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/PaoSubscriptionCatalogSource.yaml patches: - spec: image: <container_image_url> - name: group-dev-group-dev-pao-ns policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/PaoSubscriptionNS.yaml - name: group-dev-group-dev-pao-sub policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/PaoSubscription.yaml - name: group-dev-group-dev-elasticsearch-ns policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: elasticsearch/ElasticsearchNS.yaml 1 - name: group-dev-group-dev-elasticsearch-operator-group policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: elasticsearch/ElasticsearchOperatorGroup.yaml - name: group-dev-group-dev-apiserver-config policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: custom-crs/apiserver-config.yaml 2 - name: group-dev-group-dev-disable-nic-lldp policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: custom-crs/disable-nic-lldp.yaml", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgu-test.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies", "policyDefaults: evaluationInterval: compliant: 30m noncompliant: 45s", "policies: - name: \"sriov-sub-policy\" manifests: - path: \"SriovSubscription.yaml\" evaluationInterval: compliant: never noncompliant: 10s", "oc get pods -n open-cluster-management-agent-addon", "NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d", "oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb", "2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller 
controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-validator-latest placementBindingDefaults: name: group-du-sno-validator-latest-placement-binding policyDefaults: namespace: ztp-group placement: labelSelector: matchExpressions: - key: du-profile operator: In values: - latest - key: group-du-sno operator: Exists - key: ztp-done operator: DoesNotExist remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-validator-latest-du-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"10000\" evaluationInterval: compliant: 5s manifests: - path: source-crs/validatorCRs/informDuValidator-MCP-master.yaml", "- path: source-crs/PerformanceProfile.yaml patches: - spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false", "- path: source-crs/PerformanceProfile.yaml patches: - spec: workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false", "- path: source-crs/PerformanceProfile.yaml patches: - spec: # workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true # additionalKernelArgs: - # - \"cpufreq.default_governor=schedutil\" 1", "oc get nodes", "oc debug node/<node-name>", "chroot /host", "cat /proc/cmdline", "- path: source-crs/TunedPerformancePatch.yaml patches: - spec: profile: - name: performance-patch data: | # [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1", "- name: subscription-policies policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/StorageLVMOSubscriptionNS.yaml - path: source-crs/StorageLVMOSubscriptionOperGroup.yaml - path: source-crs/StorageLVMOSubscription.yaml spec: name: lvms-operator channel: stable-4.16", "- path: source-crs/StorageLVMSubscriptionNS.yaml - path: source-crs/StorageLVMSubscriptionOperGroup.yaml - path: source-crs/StorageLVMSubscription.yaml", "- fileName: StorageLVMCluster.yaml policyName: \"lvms-config\" metadata: name: \"lvms-storage-cluster-config\" spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "- path: source-crs/PtpOperatorConfigForEvent.yaml patches: - metadata: name: default namespace: openshift-ptp annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "- path: source-crs/PtpConfigSlave.yaml 1 patches: - metadata: name: \"du-ptp-slave\" spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/master priority: 4 profile: slave profile: - name: \"slave\" # This interface must match the hardware in this group interface: \"ens5f0\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -n 24\" 4 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 
dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 ptpClockThreshold: 5 holdOverTimeout: 30 # seconds maxOffsetThreshold: 100 # nano seconds minOffsetThreshold: -100", "Bare Metal Event Relay Operator - path: source-crs/BareMetalEventRelaySubscriptionNS.yaml - path: source-crs/BareMetalEventRelaySubscriptionOperGroup.yaml - path: source-crs/BareMetalEventRelaySubscription.yaml", "- path: source-crs/HardwareEvent.yaml 1 patches: - spec: logLevel: debug nodeSelector: {} transportHost: http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "AMQ Interconnect Operator for fast events - path: source-crs/AmqSubscriptionNS.yaml - path: source-crs/AmqSubscriptionOperGroup.yaml - path: source-crs/AmqSubscription.yaml Bare Metal Event Relay Operator - path: source-crs/BareMetalEventRelaySubscriptionNS.yaml - path: source-crs/BareMetalEventRelaySubscriptionOperGroup.yaml - path: source-crs/BareMetalEventRelaySubscription.yaml", "- path: source-crs/AmqInstance.yaml", "- path: HardwareEvent.yaml patches: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", 
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "oc debug node/my-sno-node", "chroot /host", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df 
-h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"", "cluster=<managed_cluster_name>", "oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster", "oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster", "oc get image.config.openshift.io cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice", "oc get pv image-registry-sc", "oc get pods -n openshift-image-registry | grep registry*", "cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d", "oc debug node/sno-1.example.com", "sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom", "imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "OCP_RELEASE_NUMBER=<release_version>", "ARCHITECTURE=<cluster_architecture> 1", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: 
USD{SIGNATURE_BASE64} EOF", "curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.16 -o ~/upgrade-graph_stable-4.16", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: du-upgrade placementBindingDefaults: name: du-upgrade-placement-binding policyDefaults: namespace: ztp-group-du-sno placement: labelSelector: matchExpressions: - key: group-du-sno operator: Exists remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: du-upgrade-platform-upgrade policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"100\" manifests: - path: source-crs/ClusterVersion.yaml 1 patches: - metadata: name: version spec: channel: stable-4.16 desiredUpdate: version: 4.16.4 upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.16 status: history: - state: Completed version: 4.16.4 - name: du-upgrade-platform-upgrade-prep policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/ImageSignature.yaml 2 - path: source-crs/DisconnectedICSP.yaml patches: - metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc get policies -A | grep platform-upgrade", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: du-upgrade placementBindingDefaults: name: du-upgrade-placement-binding policyDefaults: namespace: ztp-group-du-sno placement: labelSelector: matchExpressions: - key: group-du-sno operator: Exists remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: du-upgrade-operator-catsrc-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/DefaultCatsrc.yaml patches: - metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.16 1 updateStrategy: 2 registryPoll: interval: 1h status: connectionState: lastObservedState: READY 3", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: du-upgrade placementBindingDefaults: name: du-upgrade-placement-binding policyDefaults: namespace: ztp-group-du-sno placement: labelSelector: matchExpressions: - key: group-du-sno operator: Exists remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: 
compliant: 10m noncompliant: 10s policies: - name: du-upgrade-fec-catsrc-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"1\" manifests: - path: source-crs/DefaultCatsrc.yaml patches: - metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - name: du-upgrade-subscriptions-fec-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/AcceleratorsSubscription.yaml patches: - spec: channel: stable source: certified-operators", "oc get policies -A | grep -E \"catsrc-policy|subscription\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1", "oc apply -f cgu-operator-upgrade-prep.yml", "oc get policies -A | grep -E \"catsrc-policy\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-operator-upgrade.yml", "oc get policy common-subscriptions-policy -n <policy_namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'", "oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq", "[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "manifests: - path: source-crs/DefaultCatsrc.yaml patches: - metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v{product-version} updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - path: source-crs/DefaultCatsrc.yaml patches: - metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namspace spec: source: redhat-operators-disconnected-v2 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno 
remediationStrategy: maxConcurrency: 10 enable: true", "oc apply -f cgu-platform-operator-upgrade-prep.yml", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-operator-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get jobs,pods -n openshift-talm-pre-cache", "oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- name: group-du-sno-pg-subscriptions-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"2\" manifests: - path: source-crs/PaoSubscriptionNS.yaml - path: source-crs/PaoSubscriptionOperGroup.yaml - path: source-crs/PaoSubscription.yaml", "oc get policy -n ztp-common common-subscriptions-policy", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: exampleconfig-ns spec: overrides: 1 platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable spaceRequired: 30 Gi 2 excludePrecachePatterns: 3 - aws - vsphere additionalImages: 4 - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu spec: preCaching: true 1 preCachingConfigRef: name: exampleconfig 2 namespace: exampleconfig-ns 3", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: default 1 spec: [...] 
spaceRequired: 30Gi 2 additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu namespace: default spec: clusters: - sno1 - sno2 preCaching: true preCachingConfigRef: - name: exampleconfig namespace: default managedPolicies: - du-upgrade-platform-upgrade - du-upgrade-operator-catsrc-policy - common-subscriptions-policy remediationStrategy: timeout: 240", "oc apply -f cgu.yaml", "oc get cgu <cgu_name> -n <cgu_namespace> -oyaml", "precaching: spec: platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: - aws - vsphere additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 spaceRequired: \"30\" status: sno1: Starting sno2: Starting", "- lastTransitionTime: \"2023-01-01T00:00:01Z\" message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClusterSelected - lastTransitionTime: \"2023-01-01T00:00:02Z\" message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - lastTransitionTime: \"2023-01-01T00:00:03Z\" message: Precaching spec is valid and consistent reason: PrecacheSpecIsWellFormed status: \"True\" type: PrecacheSpecValid - lastTransitionTime: \"2023-01-01T00:00:04Z\" message: Precaching in progress for 1 clusters reason: InProgress status: \"False\" type: PrecachingSucceeded", "Type: \"PrecacheSpecValid\" Status: False, Reason: \"PrecacheSpecIncomplete\" Message: \"Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io \"<pre-caching_cr_name>\" not found\"", "oc get jobs -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE pre-cache 0/1 1s 1s", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccesfulCreate 19s job-controller Created pod: pre-cache-abcd1", "oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccesfulCreate 5m19s job-controller Created pod: pre-cache-abcd1 Normal Completed 19s job-controller Job completed", "oc debug node/cnfdf00.example.lab", "chroot /host/", "sudo podman images | grep <operator_name>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 
deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 du-profile: \"latest\" sourceFiles: 2 - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: DefaultCatsrc.yaml 3 policyName: \"config-policy\" 4 metadata: name: redhat-operators-disconnected spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave 
phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq", "{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": \"True\", \"type\": \"Progressing\" }", "oc get policies -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m", "export NS=<namespace>", "oc get policy -n USDNS", "oc describe -n openshift-gitops application policies", "Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError", "Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error", "oc get policy -n USDCLUSTER", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d", "oc get PlacementRule -n USDNS", "oc get PlacementRule -n USDNS <placement_rule_name> -o yaml", "oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq", "oc get policy -n USDCLUSTER", "export CLUSTER=<clusterName>", "oc get clustergroupupgrades -n ztp-install USDCLUSTER", "oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'", "oc delete clustergroupupgrades -n ztp-install USDCLUSTER", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true", "- fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-remove namespace: default spec: managedPolicies: - ztp-group.group-du-sno-config-policy enable: false clusters: - spoke1 - spoke2 remediationStrategy: maxConcurrency: 2 timeout: 240 batchTimeoutAction:", "oc create -f cgu-remove.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-remove --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get <kind> <changed_cr_name>", "NAMESPACE NAME REMEDIATION ACTION 
COMPLIANCE STATE AGE default cgu-ztp-group.group-du-sno-config-policy enforce 17m default ztp-group.group-du-sno-config-policy inform NonCompliant 15h", "oc get <kind> <changed_cr_name>", "mkdir -p ./out", "podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16.1 extract /home/ztp --tar | tar x -C ./out", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"", "example └── policygentemplates ├── dev.yaml ├── kustomization.yaml ├── mec-edge-sno1.yaml ├── sno.yaml └── source-crs 1 ├── PaoCatalogSource.yaml ├── PaoSubscription.yaml ├── custom-crs | ├── apiserver-config.yaml | └── disable-nic-lldp.yaml └── elasticsearch ├── ElasticsearchNS.yaml └── ElasticsearchOperatorGroup.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-dev\" namespace: \"ztp-clusters\" spec: bindingRules: dev: \"true\" mcp: \"master\" sourceFiles: # These policies/CRs come from the internal container Image #Cluster Logging - fileName: ClusterLogNS.yaml remediationAction: inform policyName: \"group-dev-cluster-log-ns\" - fileName: ClusterLogOperGroup.yaml remediationAction: inform policyName: \"group-dev-cluster-log-operator-group\" - fileName: ClusterLogSubscription.yaml remediationAction: inform policyName: \"group-dev-cluster-log-sub\" #Local Storage Operator - fileName: StorageNS.yaml remediationAction: inform policyName: \"group-dev-lso-ns\" - fileName: StorageOperGroup.yaml remediationAction: inform policyName: \"group-dev-lso-operator-group\" - fileName: StorageSubscription.yaml remediationAction: inform policyName: \"group-dev-lso-sub\" #These are custom local polices that come from the source-crs directory in the git repo # Performance Addon Operator - fileName: PaoSubscriptionNS.yaml remediationAction: inform policyName: \"group-dev-pao-ns\" - fileName: PaoSubscriptionCatalogSource.yaml remediationAction: inform policyName: \"group-dev-pao-cat-source\" spec: image: <container_image_url> - fileName: 
PaoSubscription.yaml remediationAction: inform policyName: \"group-dev-pao-sub\" #Elasticsearch Operator - fileName: elasticsearch/ElasticsearchNS.yaml 1 remediationAction: inform policyName: \"group-dev-elasticsearch-ns\" - fileName: elasticsearch/ElasticsearchOperatorGroup.yaml remediationAction: inform policyName: \"group-dev-elasticsearch-operator-group\" #Custom Resources - fileName: custom-crs/apiserver-config.yaml 2 remediationAction: inform policyName: \"group-dev-apiserver-config\" - fileName: custom-crs/disable-nic-lldp.yaml remediationAction: inform policyName: \"group-dev-disable-nic-lldp\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: custom-source-cr namespace: ztp-clusters spec: managedPolicies: - group-dev-config-policy enable: true clusters: - cluster1 remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgu-test.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-clusters custom-source-cr 6s InProgress Remediating non-compliant policies ztp-install cluster1 19h Completed All clusters are compliant with all the managed policies", "spec: evaluationInterval: compliant: 30m noncompliant: 20s", "spec: sourceFiles: - fileName: SriovSubscription.yaml policyName: \"sriov-sub-policy\" evaluationInterval: compliant: never noncompliant: 10s", "oc get pods -n open-cluster-management-agent-addon", "NAME READY STATUS RESTARTS AGE config-policy-controller-858b894c68-v4xdb 1/1 Running 22 (5d8h ago) 10d", "oc logs -n open-cluster-management-agent-addon config-policy-controller-858b894c68-v4xdb", "2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-config-policy-config\"} 2022-05-10T15:10:25.280Z info configuration-policy-controller controllers/configurationpolicy_controller.go:166 Skipping the policy evaluation due to the policy not reaching the evaluation interval {\"policy\": \"compute-1-common-compute-1-catalog-policy-config\"}", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: # spec: # workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: # spec: # workloadHints: realTime: true highPowerConsumption: true perPodPowerManagement: false", "- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: # spec: # workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true # additionalKernelArgs: - # - \"cpufreq.default_governor=schedutil\" 1", "oc get nodes", "oc debug node/<node-name>", "chroot /host", "cat /proc/cmdline", "- fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" spec: profile: - name: performance-patch data: | # [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct=<x> 1", "- fileName: StorageLVMOSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMOSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMOSubscription.yaml spec: name: lvms-operator channel: 
stable-4.16 policyName: subscription-policies", "- fileName: StorageLVMSubscriptionNS.yaml policyName: subscription-policies - fileName: StorageLVMSubscriptionOperGroup.yaml policyName: subscription-policies - fileName: StorageLVMSubscription.yaml policyName: subscription-policies", "- fileName: StorageLVMCluster.yaml policyName: \"lvms-config\" spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\" spec: daemonNodeSelector: {} ptpEventConfig: enableEventPublisher: true transportHost: http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043", "- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 # seconds maxOffsetThreshold: 100 # nano seconds minOffsetThreshold: -100", "Bare Metal Event Relay Operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: HardwareEvent.yaml 1 policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"http://hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043\" logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "AMQ Interconnect Operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Relay Operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"", "- fileName: AmqInstance.yaml policyName: \"config-policy\"", "- path: HardwareEvent.yaml patches: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"", "oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", 
"{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] spec: clusters: - nodes: - ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# # Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "oc debug node/my-sno-node", "chroot /host", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df 
-h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "sourceFiles: # storage class - fileName: StorageClass.yaml policyName: \"sc-for-image-registry\" metadata: name: image-registry-sc annotations: ran.openshift.io/ztp-deploy-wave: \"100\" 1 # persistent volume claim - fileName: StoragePVC.yaml policyName: \"pvc-for-image-registry\" metadata: name: image-registry-pvc namespace: openshift-image-registry annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: image-registry-sc volumeMode: Filesystem # persistent volume - fileName: ImageRegistryPV.yaml 2 policyName: \"pv-for-image-registry\" metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" - fileName: ImageRegistryConfig.yaml policyName: \"config-for-image-registry\" complianceType: musthave metadata: annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: storage: pvc: claim: \"image-registry-pvc\"", "cluster=<managed_cluster_name>", "oc get secret -n USDcluster USDcluster-admin-password -o jsonpath='{.data.password}' | base64 -d > kubeadmin-password-USDcluster", "oc get secret -n USDcluster USDcluster-admin-kubeconfig -o jsonpath='{.data.kubeconfig}' | base64 -d > kubeconfig-USDcluster && export KUBECONFIG=./kubeconfig-USDcluster", "oc get image.config.openshift.io cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2021-10-08T19:02:39Z\" generation: 5 name: cluster resourceVersion: \"688678648\" uid: 0406521b-39c0-4cda-ba75-873697da75a4 spec: additionalTrustedCA: name: acm-ice", "oc get pv image-registry-sc", "oc get pods -n openshift-image-registry | grep registry*", "cluster-image-registry-operator-68f5c9c589-42cfg 1/1 Running 0 8d image-registry-5f8987879-6nx6h 1/1 Running 0 8d", "oc debug node/sno-1.example.com", "sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 446.6G 0 disk |-sda1 8:1 0 1M 0 part |-sda2 8:2 0 127M 0 part |-sda3 8:3 0 384M 0 part /boot |-sda4 8:4 0 336.3G 0 part /sysroot `-sda5 8:5 0 100.1G 0 part /var/imageregistry 1 sdb 8:16 0 446.6G 0 disk sr0 11:0 1 104M 0 rom", "imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "OCP_RELEASE_NUMBER=<release_version>", "ARCHITECTURE=<cluster_architecture> 1", "DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"", "DIGEST_ALGO=\"USD{DIGEST%%:*}\"", "DIGEST_ENCODED=\"USD{DIGEST#*:}\"", "SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)", "cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: 
USD{SIGNATURE_BASE64} EOF", "curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.16 -o ~/upgrade-graph_stable-4.16", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.16\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.16 desiredUpdate: version: 4.16.4 status: history: - version: 4.16.4 state: \"Completed\"", "oc get policies -A | grep platform-upgrade", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.16 1 updateStrategy: 2 registryPoll: interval: 1h status: connectionState: lastObservedState: READY 3", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: # - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators", "oc get policies -A | grep -E \"catsrc-policy|subscription\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1", "oc apply -f 
cgu-operator-upgrade-prep.yml", "oc get policies -A | grep -E \"catsrc-policy\"", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-operator-upgrade.yml", "oc get policy common-subscriptions-policy -n <policy_namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'", "oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq", "[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators-disconnected:v4.16 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators-disconnected-v2 1 spec: displayName: Red Hat Operators Catalog v2 2 image: registry.example.com:5000/olm/redhat-operators-disconnected:<version> 3 updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: operator-subscription namespace: operator-namespace spec: source: redhat-operators-disconnected-v2 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true", "oc apply -f cgu-platform-operator-upgrade-prep.yml", "oc get policies --all-namespaces", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false", "oc apply -f cgu-platform-operator-upgrade.yml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge", "oc get jobs,pods -n openshift-talo-pre-cache", "oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade 
--patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge", "oc get policies --all-namespaces", "- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave", "oc get policy -n ztp-common common-subscriptions-policy", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: exampleconfig-ns spec: overrides: 1 platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable spaceRequired: 30 Gi 2 excludePrecachePatterns: 3 - aws - vsphere additionalImages: 4 - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu spec: preCaching: true 1 preCachingConfigRef: name: exampleconfig 2 namespace: exampleconfig-ns 3", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: exampleconfig namespace: default 1 spec: [...] spaceRequired: 30Gi 2 additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu namespace: default spec: clusters: - sno1 - sno2 preCaching: true preCachingConfigRef: - name: exampleconfig namespace: default managedPolicies: - du-upgrade-platform-upgrade - du-upgrade-operator-catsrc-policy - common-subscriptions-policy remediationStrategy: timeout: 240", "oc apply -f cgu.yaml", "oc get cgu <cgu_name> -n <cgu_namespace> -oyaml", "precaching: spec: platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: - aws - vsphere additionalImages: - quay.io/exampleconfig/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e1ef - quay.io/exampleconfig/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adfaef - quay.io/exampleconfig/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfsa09 spaceRequired: \"30\" status: sno1: Starting sno2: Starting", "- lastTransitionTime: \"2023-01-01T00:00:01Z\" message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClusterSelected - lastTransitionTime: \"2023-01-01T00:00:02Z\" message: Completed validation reason: ValidationCompleted status: \"True\" type: 
Validated - lastTransitionTime: '2023-01-01T00:00:03Z' message: Precaching spec is valid and consistent reason: PrecacheSpecIsWellFormed status: 'True' type: PrecacheSpecValid - lastTransitionTime: '2023-01-01T00:00:04Z' message: Precaching in progress for 1 clusters reason: InProgress status: 'False' type: PrecachingSucceeded", "Type: \"PrecacheSpecValid\" Status: False, Reason: \"PrecacheSpecIncomplete\" Message: \"Precaching spec is incomplete: failed to get PreCachingConfig resource due to PreCachingConfig.ran.openshift.io \"<pre-caching_cr_name>\" not found\"", "oc get jobs -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE pre-cache 0/1 1s 1s", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccessfulCreate 19s job-controller Created pod: pre-cache-abcd1", "oc logs -f pre-cache-abcd1 -n openshift-talo-pre-cache", "oc describe pod pre-cache -n openshift-talo-pre-cache", "Type Reason Age From Message Normal SuccessfulCreate 5m19s job-controller Created pod: pre-cache-abcd1 Normal Completed 19s job-controller Job completed", "oc debug node/cnfdf00.example.lab", "chroot /host/", "sudo podman images | grep <operator_name>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240", "argocd.argoproj.io/sync-options: Replace=true", "apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: \"[\\\"ens5f0\\\"]\" hardware-type-1-sriov-node-policy-pfNames-2: \"[\\\"ens7f0\\\"]\" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: \"2-31,34-63\" hardware-type-1-cpu-reserved: \"0-1,32-33\" hardware-type-1-hugepages-default: \"1G\" hardware-type-1-hugepages-size: \"1G\" hardware-type-1-hugepages-count: \"32\"", "apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: \"[{\\\"type\\\":\\\"kafka\\\", \\\"name\\\":\\\"kafka-open\\\", \\\"url\\\":\\\"tcp://10.46.55.190:9092/test\\\"}]\" zone-1-cluster-log-fwd-pipelines: \"[{\\\"inputRefs\\\":[\\\"audit\\\", \\\"infrastructure\\\"], \\\"labels\\\": {\\\"label1\\\": \\\"test1\\\", \\\"label2\\\": \\\"test2\\\", \\\"label3\\\": \\\"test3\\\", \\\"label4\\\": \\\"test4\\\"}, \\\"name\\\": \\\"all-to-default\\\", \\\"outputRefs\\\": [\\\"kafka-open\\\"]}]\"", "apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: \"140\" du-sno-1-zone-1-sriov-network-vlan-2: \"150\"", "oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type 
merge -p '{\"metadata\":{\"labels\":{\"hardware-type\": \"hardware-type-1\", \"group-du-sno-zone\": \"zone-1\"}}}'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-pgt placementBindingDefaults: name: group-du-sno-pgt-placement-binding policyDefaults: placement: labelSelector: matchExpressions: - key: group-du-sno-zone operator: In values: - zone-1 - key: hardware-type operator: In values: - hardware-type-1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-pgt-group-du-sno-cfg-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"10\" manifests: - path: source-crs/ClusterLogForwarder.yaml patches: - spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - path: source-crs/PerformanceProfile-MCP-master.yaml patches: - metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels \"hardware-type\")) | toInt hub}}' size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' realTimeKernel: enabled: true - name: group-du-sno-pgt-group-du-sno-sriov-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"100\" manifests: - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral 
hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: \"zone-1\" hardware-type: \"hardware-type-1\" mcp: \"master\" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels \"hardware-type\")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade 
metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgr-example.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f talm-subscription.yaml", "oc get csv -n openshift-operators", "NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.16.x Topology Aware Lifecycle Manager 4.16.x Succeeded", "oc get deploy -n openshift-operators", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s", "spec remediationStrategy: maxConcurrency: 1 timeout: 240", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: 1 addClusterLabels: upgrade-done: \"\" deleteClusterLabels: upgrade-running: \"\" deleteObjects: true beforeEnable: 2 addClusterLabels: upgrade-running: \"\" clusters: 3 - spoke1 enable: false 4 managedPolicies: 5 - talm-policy preCaching: false remediationStrategy: 6 canaries: 7 - spoke1 maxConcurrency: 2 8 timeout: 240 clusterLabelSelectors: 9 - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: 10 status: 11 computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected 12 - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated 13 - lastTransitionTime: '2022-11-18T16:37:16Z' message: Not enabled reason: NotEnabled status: 'False' type: Progressing managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status:", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c Spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 enable: true managedPolicies: - talm-policy preCaching: true remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchExpressions: - key: label1 operator: In values: - value1a - value1b batchTimeoutAction: status: clusters: - name: spoke1 state: complete computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Remediating non-compliant policies reason: InProgress status: 'True' type: Progressing 1 
managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - - spoke2 - spoke3 status: currentBatch: 2 currentBatchRemediationProgress: spoke2: state: Completed spoke3: policyIndex: 0 state: InProgress currentBatchStartedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 - spoke4 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 1 clusters: - name: spoke1 state: complete - name: spoke4 state: complete conditions: - message: All selected clusters are valid reason: ClusterSelectionCompleted status: \"True\" type: ClustersSelected - message: Completed validation reason: ValidationCompleted status: \"True\" type: Validated - message: All clusters are compliant with all the managed policies reason: Completed status: \"False\" type: Progressing 2 - message: All clusters are compliant with all the managed policies reason: Completed status: \"True\" type: Succeeded 3 managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 - - spoke4 status: completedAt: '2022-11-18T16:27:16Z' startedAt: '2022-11-18T16:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: creationTimestamp: '2022-11-18T16:27:15Z' finalizers: - ran.openshift.io/cleanup-finalizer generation: 1 name: talm-cgu namespace: talm-namespace resourceVersion: '40451823' uid: cca245a5-4bca-45fa-89c0-aa6af81a596c spec: actions: afterCompletion: deleteObjects: true beforeEnable: {} clusters: - spoke1 - spoke2 enable: true managedPolicies: - talm-policy preCaching: false remediationStrategy: maxConcurrency: 2 timeout: 240 status: clusters: - name: spoke1 state: complete - currentPolicy: 1 name: talm-policy status: NonCompliant name: spoke2 state: timedout computedMaxConcurrency: 2 conditions: - lastTransitionTime: '2022-11-18T16:27:15Z' message: All selected clusters are valid reason: ClusterSelectionCompleted status: 'True' type: ClustersSelected - lastTransitionTime: '2022-11-18T16:27:15Z' message: Completed validation reason: ValidationCompleted status: 'True' type: Validated - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Progressing - lastTransitionTime: '2022-11-18T16:37:16Z' message: Policy remediation took too long reason: TimedOut status: 'False' type: Succeeded 2 managedPoliciesForUpgrade: - name: talm-policy namespace: talm-namespace managedPoliciesNs: talm-policy: talm-namespace remediationPlan: - - spoke1 - spoke2 status: startedAt: '2022-11-18T16:27:15Z' completedAt: '2022-11-18T20:27:15Z'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesForUpgrade: - name: 
policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}", "oc apply -f <name>.yaml", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy 
namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: ocp-4.16.4 namespace: platform-upgrade spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: upgrade spec: namespaceSelector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: config.openshift.io/v1 kind: ClusterVersion metadata: name: version spec: channel: stable-4.16 desiredUpdate: version: 4.16.4 upstream: https://api.openshift.com/api/upgrades_info/v1/graph status: history: - state: Completed version: 4.16.4 remediationAction: inform severity: low remediationAction: inform", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: 
cluster-logging namespace: openshift-logging annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown 1", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 batchTimeoutAction: 5", "oc create -f cgu-1.yaml", "oc get cgu --all-namespaces", "NAMESPACE NAME AGE STATE DETAILS default cgu-1 8m55 NotEnabled Not Enabled", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Not enabled\", 1 \"reason\": \"NotEnabled\", \"status\": \"False\", \"type\": \"Progressing\" } ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }", "oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge", "oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq", "{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"All selected clusters are valid\", \"reason\": \"ClusterSelectionCompleted\", \"status\": \"True\", \"type\": \"ClustersSelected\" }, { \"lastTransitionTime\": \"2022-02-25T15:33:07Z\", \"message\": \"Completed validation\", \"reason\": \"ValidationCompleted\", \"status\": \"True\", \"type\": \"Validated\" }, { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"Remediating non-compliant policies\", \"reason\": \"InProgress\", \"status\": 
\"True\", \"type\": \"Progressing\" } ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchRemediationProgress\": { \"spoke1\": { \"policyIndex\": 1, \"state\": \"InProgress\" }, \"spoke2\": { \"policyIndex\": 1, \"state\": \"InProgress\" } }, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"startedAt\": \"2022-02-25T15:54:16Z\" } }", "get policies -A", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE spoke1 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke1 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke2 default.policy1-common-cluster-version-policy enforce Compliant 18m spoke2 default.policy2-common-nto-sub-policy enforce NonCompliant 18m spoke5 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke5 default.policy4-common-sriov-sub-policy inform NonCompliant 18m spoke6 default.policy3-common-ptp-sub-policy inform NonCompliant 18m spoke6 default.policy4-common-sriov-sub-policy inform NonCompliant 18m default policy1-common-ptp-sub-policy inform Compliant 18m default policy2-common-sriov-sub-policy inform NonCompliant 18m default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m", "export KUBECONFIG=<cluster_kubeconfig_absolute_path>", "oc get subs -A | grep -i <subscription_name>", "NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.4.16.5 True True 43s Working towards 4.4.16.7: 71 of 735 done (9% complete)", "oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"", "oc get installplan -n <subscription_namespace>", "NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1", "oc get 
csv -n <operator_namespace>", "NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded", "oc adm release info <ocp-version>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-group-upgrade-overrides data: excludePrecachePatterns: | azure 1 aws vsphere alibaba", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f clustergroupupgrades-group-du.yaml", "oc get cgu -A", "NAMESPACE NAME AGE STATE DETAILS ztp-group-du-sno du-upgrade-4918 10s InProgress Precaching is required and not done 1", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"InProgress\", \"status\": \"False\", \"type\": \"PrecachingSucceeded\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 1 \"cnfdb2\" ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\" \"cnfdb2\": \"Succeeded\"} } }", "oc get jobs,pods -n openshift-talo-pre-cache", "NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s", "oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'", "\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingSucceeded\" 1 }", "oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>", "oc apply -f <ClusterGroupUpgradeCR_YAML>", "oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'", "[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get policies --all-namespaces", "NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h", "oc get pod -n openshift-operators", "NAME READY STATUS 
RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get managedclusters", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2", "oc get managedcluster --selector=upgrade=true 1", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "spec: remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 clusterLabelSelectors: - matchLabels: upgrade: true", "oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'", "[\"spoke1\", \"spoke3\"]", "oc get managedcluster --selector=upgrade=true", "NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h", "oc get jobs,pods -n openshift-talo-pre-cache", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'", "{\"maxConcurrency\":2, \"timeout\":240}", "oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'", "2", "oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'", "{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"Missing managed policies:[policyList]\", \"reason\":\"NotAllManagedPoliciesExist\", \"status\":\"False\", \"type\":\"Validated\"}", "oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'", "[[\"spoke2\", \"spoke3\"]]", "oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager", "ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem", "oc get pods -n openshift-talo-pre-cache", "oc logs -n openshift-talo-pre-cache <pod name>", "oc describe pod -n openshift-talo-pre-cache <pod name>", "oc describe job -n openshift-talo-pre-cache pre-cache", "oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq", "{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1", "oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq", "{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1", "spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: 
node-role.kubernetes.io/worker: \"\"", "apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: example-sno-workers placementBindingDefaults: name: example-sno-workers-placement-binding policyDefaults: namespace: example-sno placement: labelSelector: matchExpressions: - key: sites operator: In values: - example-sno 1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: example-sno-workers-config-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"10\" manifests: - path: source-crs/MachineConfigGeneric.yaml 2 patches: - metadata: labels: machineconfiguration.openshift.io/role: worker 3 name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - path: source-crs/PerformanceProfile-MCP-worker.yaml patches: - metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: 4-47 reserved: 0-3 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G realTimeKernel: enabled: true - path: source-crs/TunedPerformancePatch-MCP-worker.yaml patches: - metadata: name: performance-patch-worker spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch-worker recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: 
data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "nodes: - hostName: \"example-node2.example.com\" role: \"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque", "oc get ppimg -n example-sno", "NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated", "oc get bmh -n example-sno", "NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1", "oc get agent -n example-sno --watch", "NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 
14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done", "oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'", "example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}", "podman pull quay.io/openshift-kni/telco-ran-tools:latest", "podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v", "factory-precaching-cli version 20221018.120852+main.feecf17", "curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia", "curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk", "wipefs -a /dev/nvme0n1", "/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa", "podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part", "gdisk -l /dev/nvme0n1", "GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. 
Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data", "lsblk -f /dev/nvme0n1", "NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071", "mount /dev/nvme0n1p1 /mnt/", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1", "taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help", "oc get csv -A | grep -i advanced-cluster-management", "open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded", "oc get csv -A | grep -i multicluster-engine", "multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded", "mkdir /root/.docker", "cp config.json /root/.docker/config.json 1", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.16.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.16.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83", "ls -l /mnt 1", "-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 
1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 
1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.16.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7", "Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.16.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.16.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8", "Generated /mnt/imageset.yaml", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.16 minVersion: 4.16.0 1 maxVersion: 4.16.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.16' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.16 packages: - name: sriov-fec 11 channels: - name: 'stable'", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] 
operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.16 packages: - name: sriov-fec channels: - name: 'stable'", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.16.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.16.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset", "apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" 1 sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterImageSetNameRef: \"eko4-img4.11.5-x86-64-appsub\" 2 clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { 
\"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"", "OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash", "Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror 
--ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority", "cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.", "update-ca-trust", "podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.16.0 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset", "apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage spec: seedImage: <seed_image>", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle 1 seedImageRef: 2 version: <target_version> image: <seed_container_image> pullSecretRef: name: <seed_pull_secret> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 3 extraManifests: 4 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 5 - name: oadp-cm-example namespace: openshift-adp", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet namespace: openshift-adp annotations: lca.openshift.io/apply-label: rbac.authorization.k8s.io/v1/clusterroles/klusterlet,apps/v1/deployments/open-cluster-management-agent/klusterlet 1 labels: velero.io/storage-location: default spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - clusterroles includedNamespaceScopedResources: - deployments", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: annotations: lca.openshift.io/target-ocp-version-manifest-count: \"5\" name: upgrade", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-containers-partitioned spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 partitions: - label: var-lib-containers startMiB: <start_of_partition> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var-lib-containers format: xfs mountOptions: - defaults - prjquota path: /var/lib/containers wipeFilesystem: true systemd: units: - contents: |- # Generated by Butane [Unit] Before=local-fs.target Requires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service 
After=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service [Mount] Where=/var/lib/containers What=/dev/disk/by-partlabel/var-lib-containers Type=xfs Options=defaults,prjquota [Install] RequiredBy=local-fs.target enabled: true name: var-lib-containers.mount", "variant: fcos version: 1.3.0 storage: disks: - device: /dev/disk/by-path/pci-<root_disk> 1 wipe_table: false partitions: - label: var-lib-containers start_mib: <start_of_partition> 2 size_mib: <partition_size> 3 filesystems: - path: /var/lib/containers device: /dev/disk/by-partlabel/var-lib-containers format: xfs wipe_filesystem: true with_mount_unit: true mount_options: - defaults - prjquota", "butane storage.bu", "{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}", "[...] spec: clusters: - nodes: - hostName: <name> ignitionConfigOverride: '{\"ignition\":{\"version\":\"3.2.0\"},\"storage\":{\"disks\":[{\"device\":\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\",\"partitions\":[{\"label\":\"var-lib-containers\",\"sizeMiB\":0,\"startMiB\":250000}],\"wipeTable\":false}],\"filesystems\":[{\"device\":\"/dev/disk/by-partlabel/var-lib-containers\",\"format\":\"xfs\",\"mountOptions\":[\"defaults\",\"prjquota\"],\"path\":\"/var/lib/containers\",\"wipeFilesystem\":true}]},\"systemd\":{\"units\":[{\"contents\":\"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\",\"enabled\":true,\"name\":\"var-lib-containers.mount\"}]}}' [...]", "oc get bmh -n my-sno-ns my-sno -ojson | jq '.metadata.annotations[\"bmac.agent-install.openshift.io/ignition-config-overrides\"]'", "\"{\\\"ignition\\\":{\\\"version\\\":\\\"3.2.0\\\"},\\\"storage\\\":{\\\"disks\\\":[{\\\"device\\\":\\\"/dev/disk/by-path/pci-0000:00:17.0-ata-1.0\\\",\\\"partitions\\\":[{\\\"label\\\":\\\"var-lib-containers\\\",\\\"sizeMiB\\\":0,\\\"startMiB\\\":250000}],\\\"wipeTable\\\":false}],\\\"filesystems\\\":[{\\\"device\\\":\\\"/dev/disk/by-partlabel/var-lib-containers\\\",\\\"format\\\":\\\"xfs\\\",\\\"mountOptions\\\":[\\\"defaults\\\",\\\"prjquota\\\"],\\\"path\\\":\\\"/var/lib/containers\\\",\\\"wipeFilesystem\\\":true}]},\\\"systemd\\\":{\\\"units\\\":[{\\\"contents\\\":\\\"# Generated by 
Butane\\\\n[Unit]\\\\nRequires=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\nAfter=systemd-fsck@dev-disk-by\\\\\\\\x2dpartlabel-var\\\\\\\\x2dlib\\\\\\\\x2dcontainers.service\\\\n\\\\n[Mount]\\\\nWhere=/var/lib/containers\\\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\\\nType=xfs\\\\nOptions=defaults,prjquota\\\\n\\\\n[Install]\\\\nRequiredBy=local-fs.target\\\",\\\"enabled\\\":true,\\\"name\\\":\\\"var-lib-containers.mount\\\"}]}}\"", "lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 446.6G 0 disk ├─sda1 8:1 0 1M 0 part ├─sda2 8:2 0 127M 0 part ├─sda3 8:3 0 384M 0 part /boot ├─sda4 8:4 0 243.6G 0 part /var │ /sysroot/ostree/deploy/rhcos/var │ /usr │ /etc │ / │ /sysroot └─sda5 8:5 0 202.5G 0 part /var/lib/containers", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 126G 84K 126G 1% /dev/shm tmpfs 51G 93M 51G 1% /run /dev/sda4 244G 5.2G 239G 3% /sysroot tmpfs 126G 4.0K 126G 1% /tmp /dev/sda5 203G 119G 85G 59% /var/lib/containers /dev/sda3 350M 110M 218M 34% /boot tmpfs 26G 0 26G 0% /run/user/1000", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management", "oc create -f lcao-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-lifecycle-agent namespace: openshift-lifecycle-agent spec: targetNamespaces: - openshift-lifecycle-agent", "oc create -f lcao-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-lifecycle-agent-subscription namespace: openshift-lifecycle-agent spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f lcao-subscription.yaml", "oc get csv -n openshift-lifecycle-agent", "NAME DISPLAY VERSION REPLACES PHASE lifecycle-agent.v4.16.0 Openshift Lifecycle Agent 4.16.0 Succeeded", "oc get deploy -n openshift-lifecycle-agent", "NAME READY UP-TO-DATE AVAILABLE AGE lifecycle-agent-controller-manager 1/1 1 1 14s", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent-operatorgroup namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── LcaSubscriptionNS.yaml │ ├── LcaSubscriptionOperGroup.yaml │ ├── LcaSubscription.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" sourceFiles: - fileName: LcaSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: LcaSubscriptionOperGroup.yaml policyName: 
\"subscriptions-policy\" - fileName: LcaSubscription.yaml policyName: \"subscriptions-policy\" [...]", "apiVersion: v1 kind: Namespace metadata: name: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" labels: kubernetes.io/metadata.name: openshift-adp", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: targetNamespaces: - openshift-adp", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: redhat-oadp-operator namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" spec: channel: stable-1.4 name: redhat-oadp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: redhat-oadp-operator.openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"2\" status: components: refs: - kind: Subscription namespace: openshift-adp conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-adp conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-adp conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded", "├── kustomization.yaml ├── sno │ ├── example-cnf.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ └── ns.yaml ├── source-crs │ ├── OadpSubscriptionNS.yaml │ ├── OadpSubscriptionOperGroup.yaml │ ├── OadpSubscription.yaml │ ├── OadpOperatorStatus.yaml", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-common-latest\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" du-profile: \"latest\" sourceFiles: - fileName: OadpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: OadpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: OadpOperatorStatus.yaml policyName: \"subscriptions-policy\" [...]", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dataprotectionapplication namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" spec: configuration: restic: enable: false 1 velero: defaultPlugins: - aws - openshift resourceTimeout: 10m backupLocations: - velero: config: profile: \"default\" region: minio s3Url: USDurl insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: USDbucketName 2 prefix: USDprefixName 3 status: conditions: - reason: Complete status: \"True\" type: Reconciled", "apiVersion: v1 kind: Secret metadata: name: cloud-credentials namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" type: Opaque", "apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: namespace: openshift-adp annotations: ran.openshift.io/ztp-deploy-wave: \"100\" status: phase: Available", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-cnf\" namespace: \"ztp-site\" spec: bindingRules: sites: \"example-cnf\" du-profile: \"latest\" mcp: \"master\" sourceFiles: - fileName: OadpSecret.yaml policyName: \"config-policy\" data: cloud: <your_credentials> 1 - fileName: DataProtectionApplication.yaml policyName: \"config-policy\" spec: backupLocations: - velero: 
config: region: minio s3Url: <your_S3_URL> 2 profile: \"default\" insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <your_bucket_name> 3 prefix: <cluster_name> 4 - fileName: OadpBackupStorageLocationStatus.yaml policyName: \"config-policy\"", "oc delete managedcluster sno-worker-example", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: #- example-seed-sno1.yaml - example-target-sno2.yaml - example-target-sno3.yaml", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: {}", "MY_USER=myuserid AUTHFILE=/tmp/my-auth.json podman login --authfile USD{AUTHFILE} -u USD{MY_USER} quay.io/USD{MY_USER}", "base64 -w 0 USD{AUTHFILE} ; echo", "apiVersion: v1 kind: Secret metadata: name: seedgen 1 namespace: openshift-lifecycle-agent type: Opaque data: seedAuth: <encoded_AUTHFILE> 2", "oc apply -f secretseedgenerator.yaml", "apiVersion: lca.openshift.io/v1 kind: SeedGenerator metadata: name: seedimage 1 spec: seedImage: <seed_container_image> 2", "oc apply -f seedgenerator.yaml", "oc get seedgenerator -o yaml", "status: conditions: - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"False\" type: SeedGenInProgress - lastTransitionTime: \"2024-02-13T21:24:26Z\" message: Seed Generation completed observedGeneration: 1 reason: Completed status: \"True\" type: SeedGenCompleted 1 observedGeneration: 1", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: \"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster", "apiVersion: 
velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5", "oc create configmap oadp-cm-example --from-file=example-oadp-resources.yaml=<path_to_oadp_crs> -n openshift-adp", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"oadpContent\": [{\"name\": \"oadp-cm-example\", \"namespace\": \"openshift-adp\"}]}}' --type=merge -n openshift-lifecycle-agent", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: \"example-sriov-node-policy\" namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [ens1f0] nodeSelector: node-role.kubernetes.io/master: \"\" mtu: 1500 numVfs: 8 priority: 99 resourceName: example-sriov-node-policy --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"example-sriov-network\" namespace: openshift-sriov-network-operator spec: ipam: |- { } linkState: auto networkNamespace: sriov-namespace resourceName: example-sriov-node-policy spoofChk: \"on\" trust: \"off\"", "oc create configmap example-extra-manifests-cm --from-file=example-extra-manifests.yaml=<path_to_extramanifest> -n openshift-lifecycle-agent", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-extra-manifests-cm\", \"namespace\": 
\"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: CatalogSource metadata: name: example-catalogsources namespace: openshift-marketplace spec: sourceType: grpc displayName: disconnected-redhat-operators image: quay.io/example-org/example-catalog:v1", "oc create configmap example-catalogsources-cm --from-file=example-catalogsources.yaml=<path_to_catalogsource_cr> -n openshift-lifecycle-agent", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"extraManifests\": [{\"name\": \"example-catalogsources-cm\", \"namespace\": \"openshift-lifecycle-agent\"}]}}' --type=merge -n openshift-lifecycle-agent", "├── source-crs/ │ ├── ibu/ │ │ ├── ImageBasedUpgrade.yaml │ │ ├── PlatformBackupRestore.yaml │ │ ├── PlatformBackupRestoreLvms.yaml │ │ ├── PlatformBackupRestoreWithIBGU.yaml ├── ├── kustomization.yaml", "apiVersion: velero.io/v1 kind: Backup metadata: name: acm-klusterlet annotations: lca.openshift.io/apply-label: \"apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials\" 1 labels: velero.io/storage-location: default namespace: openshift-adp spec: includedNamespaces: - open-cluster-management-agent includedClusterScopedResources: - klusterlets.operator.open-cluster-management.io - clusterroles.rbac.authorization.k8s.io - clusterrolebindings.rbac.authorization.k8s.io - priorityclasses.scheduling.k8s.io includedNamespaceScopedResources: - deployments - serviceaccounts - secrets excludedNamespaceScopedResources: [] --- apiVersion: velero.io/v1 kind: Restore metadata: name: acm-klusterlet namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"1\" spec: backupName: acm-klusterlet", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: lvmcluster namespace: openshift-adp spec: includedNamespaces: - openshift-storage includedNamespaceScopedResources: - lvmclusters - lvmvolumegroups - lvmvolumegroupnodestatuses --- apiVersion: velero.io/v1 kind: Restore metadata: name: lvmcluster namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"2\" 1 spec: backupName: lvmcluster", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: lca.openshift.io/apply-label: \"apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test\" 1 name: backup-app-cluster-resources labels: velero.io/storage-location: default namespace: openshift-adp spec: includedClusterScopedResources: - customresourcedefinitions - securitycontextconstraints - 
clusterrolebindings - clusterroles excludedClusterScopedResources: - Namespace --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app-cluster-resources namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" 2 spec: backupName: backup-app-cluster-resources", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 excludedClusterScopedResources: - persistentVolumes --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: backup-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets - configmaps - cronjobs - services - job - poddisruptionbudgets - <application_custom_resources> 1 includedClusterScopedResources: - persistentVolumes 2 - logicalvolumes.topolvm.io 3 - volumesnapshotcontents 4 --- apiVersion: velero.io/v1 kind: Restore metadata: name: test-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"4\" spec: backupName: backup-app restorePVs: true restoreStatus: includedResources: - logicalvolumes 5", "apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization configMapGenerator: 1 - files: - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml name: oadp-cm namespace: openshift-adp 2 generatorOptions: disableNameSuffixHash: true", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: example-sno spec: bindingRules: sites: \"example-sno\" du-profile: \"4.15\" mcp: \"master\" sourceFiles: - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" 1 spec: resourceName: du_fh vlan: 140 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-fh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: netdevice isRdma: false nicSelector: pfNames: [\"ens5f0\"] numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml policyName: \"config-policy\" metadata: name: \"sriov-nw-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: resourceName: du_mh vlan: 150 - fileName: SriovNetworkNodePolicy.yaml policyName: \"config-policy\" metadata: name: \"sriov-nnp-du-mh\" labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: deviceType: vfio-pci isRdma: false nicSelector: pfNames: [\"ens7f0\"] numVfs: 8 priority: 10 resourceName: du_mh - fileName: DefaultCatsrc.yaml 2 policyName: \"config-policy\" metadata: name: default-cat-source namespace: openshift-marketplace labels: lca.openshift.io/target-ocp-version: \"4.15\" spec: displayName: default-cat-source image: quay.io/example-org/example-catalog:v1", "oc -n openshift-lifecycle-agent annotate ibu upgrade 
image-cleanup.lca.openshift.io/disk-usage-threshold-percent='65'", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/disk-usage-threshold-percent-", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep='Disabled'", "oc -n openshift-lifecycle-agent annotate ibu upgrade image-cleanup.lca.openshift.io/on-prep-", "skopeo inspect docker://<imagename> | jq -r '.Labels.\"com.openshift.lifecycle-agent.seed_cluster_info\" | fromjson | .release_registry'", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 1 image: <seed_container_image> 2 pullSecretRef: <seed_pull_secret> 3 autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 4 extraManifests: 5 - name: example-extra-manifests-cm namespace: openshift-lifecycle-agent - name: example-catalogsources-cm namespace: openshift-lifecycle-agent oadpContent: 6 - name: oadp-cm-example namespace: openshift-adp", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Prep\"}}' --type=merge -n openshift-lifecycle-agent", "[...] metadata: annotations: extra-manifest.lca.openshift.io/validation-warning: '...' [...]", "oc get ibu -o yaml", "conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 13 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 13 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep stage completed successfully observedGeneration: 13 reason: Completed status: \"True\" type: PrepCompleted observedGeneration: 13 validNextStages: - Idle - Upgrade", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Upgrade\"}}' --type=merge", "oc get ibu -o yaml", "status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: |- Waiting for system to stabilize: one or more health checks failed - one or more ClusterOperators not yet ready: authentication - one or more MachineConfigPools not yet ready: master - one or more ClusterServiceVersions not yet ready: sriov-fec.v2.8.0 observedGeneration: 1 reason: InProgress status: \"True\" type: UpgradeInProgress observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-05-19T14:01:52Z\" validNextStages: - Rollback", "oc get ibu -o yaml", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge", "oc get ibu -o yaml", "status: conditions: - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: In progress observedGeneration: 5 reason: InProgress status: \"False\" type: Idle - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed observedGeneration: 5 reason: Completed status: \"False\" type: PrepInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Prep completed successfully observedGeneration: 5 reason: Completed status: \"True\" type: PrepCompleted - lastTransitionTime: 
\"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"False\" type: UpgradeInProgress - lastTransitionTime: \"2024-01-01T09:00:00Z\" message: Upgrade completed observedGeneration: 1 reason: Completed status: \"True\" type: UpgradeCompleted observedGeneration: 1 rollbackAvailabilityExpiration: \"2024-01-01T09:00:00Z\" validNextStages: - Idle - Rollback", "oc get restores -n openshift-adp -o custom-columns=NAME:.metadata.name,Status:.status.phase,Reason:.status.failureReason", "NAME Status Reason acm-klusterlet Completed <none> 1 apache-app Completed <none> localvolume Completed <none>", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle seedImageRef: version: 4.15.2 image: <seed_container_image> autoRollbackOnFailure: {} initMonitorTimeoutSeconds: 1800 1 [...]", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Rollback\"}}' --type=merge", "oc patch imagebasedupgrades.lca.openshift.io upgrade -p='{\"spec\": {\"stage\": \"Idle\"}}' --type=merge -n openshift-lifecycle-agent", "oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\ 1 --image=quay.io/openshift/origin-must-gather:latest 2", "message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle", "ostree admin status", "ostree admin undeploy <index_of_deployment>", "stateroot=\"<stateroot_to_delete>\"", "unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true", "oc describe pod <your_app_name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s", "apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7", "plan: - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60", "plan: - actions: [\"Prep\", \"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10", "plan: - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 200 timeout: 60 - actions: [\"Upgrade\"] rolloutStrategy: maxConcurrency: 200 timeout: 20 - actions: [\"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 500 timeout: 10", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: 
seedImageRef: 2 image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\"] rolloutStrategy: maxConcurrency: 2 timeout: 2400", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep name: spoke1 - completedActions: - action: Prep name: spoke4 - failedActions: - action: Prep name: spoke6", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'", "oc get ibgu -o yaml", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"Upgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 2, \"timeout\": 30}}}]'", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"AbortOnFailure\"], \"rolloutStrategy\": {\"maxConcurrency\": 5, \"timeout\": 10}}}]'", "oc get ibgu -o yaml", "oc patch ibgu <filename> --type=json -p '[{\"op\": \"add\", \"path\": \"/spec/plan/-\", \"value\": {\"actions\": [\"FinalizeUpgrade\"], \"rolloutStrategy\": {\"maxConcurrency\": 10, \"timeout\": 3}}}]'", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep - action: AbortOnFailure failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - completedActions: - action: AbortOnFailure failedActions: - action: Prep name: spoke6", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: 1 - matchExpressions: - key: name operator: In values: - spoke1 - spoke4 - spoke6 ibuSpec: seedImageRef: 2 image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: 3 - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: 4 - name: oadp-cm namespace: openshift-adp plan: 5 - actions: [\"Prep\", \"Upgrade\", \"FinalizeUpgrade\"] rolloutStrategy: maxConcurrency: 200 6 timeout: 2400 7", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep failedActions: - action: Upgrade name: spoke1 - completedActions: - action: Prep - action: Upgrade - action: FinalizeUpgrade name: spoke4 - failedActions: - action: Prep name: spoke6", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: - spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.16.0-rc.1 version: 4.16.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Abort\"] rolloutStrategy: maxConcurrency: 5 timeout: 10", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Prep currentActions: - action: Abort name: spoke4", "apiVersion: lcm.openshift.io/v1alpha1 kind: ImageBasedGroupUpgrade metadata: name: <filename> namespace: default spec: clusterLabelSelectors: - matchExpressions: - key: name operator: In values: 
- spoke4 ibuSpec: seedImageRef: image: quay.io/seed/image:4.17.0-rc.1 version: 4.17.0-rc.1 pullSecretRef: name: \"<seed_pull_secret>\" extraManifests: - name: example-extra-manifests namespace: openshift-lifecycle-agent oadpContent: - name: oadp-cm namespace: openshift-adp plan: - actions: [\"Rollback\", \"FinalizeRollback\"] rolloutStrategy: maxConcurrency: 200 timeout: 2400", "oc apply -f <filename>.yaml", "oc get ibgu -o yaml", "status: clusters: - completedActions: - action: Rollback - action: FinalizeRollback name: spoke4", "oc adm must-gather --dest-dir=must-gather/tmp --image=USD(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == \"manager\")].image}') --image=quay.io/konveyor/oadp-must-gather:latest \\ 1 --image=quay.io/openshift/origin-must-gather:latest 2", "message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle observedGeneration: 5 reason: AbortFailed status: \"False\" type: Idle", "ostree admin status", "ostree admin undeploy <index_of_deployment>", "stateroot=\"<stateroot_to_delete>\"", "unshare -m /bin/sh -c \"mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/USD{stateroot}\"", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true", "oc describe pod <your_app_name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 58s (x2 over 66s) default-scheduler 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Normal Scheduled 56s default-scheduler Successfully assigned default/db-1234 to sno1.example.lab Warning FailedMount 24s (x7 over 55s) kubelet MountVolume.SetUp failed for volume \"pvc-1234\" : rpc error: code = Unknown desc = VolumeID is not found", "apiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: 1 - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Retain Bound default/pvc-db lvms-vg1 4h45m NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 4h45m NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 4h45m", "oc get pv,pvc,logicalvolumes.topolvm.io -A", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE persistentvolume/pvc-1234 1Gi RWO Delete Bound default/pvc-db lvms-vg1 19s NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE default persistentvolumeclaim/pvc-db Bound pvc-1234 1Gi RWO lvms-vg1 19s NAMESPACE NAME AGE logicalvolume.topolvm.io/pvc-1234 18s", "apiVersion: velero.io/v1 kind: Restore metadata: name: sample-vote-app namespace: openshift-adp labels: velero.io/storage-location: default annotations: lca.openshift.io/apply-wave: \"3\" spec: backupName: sample-vote-app restorePVs: true 1 restoreStatus: 2 includedResources: - logicalvolumes", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details", "oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz", "oc adm upgrade", "Cluster version is 4.14.34 Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.14 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.16, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:14e6ba3975e6c73b659fa55af25084b20ab38a543772ca70e184b903db73092b 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:4bc4925e8028158e3f313aa83e59e181c94d88b4aa82a3b00202d6f354e8dfed 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:883088e3e6efa7443b0ac28cd7682c2fdbda889b576edad626769bf956ac0858", "oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq", "{ \"channel\": \"stable-4.14\", \"clusterID\": \"01eb9a57-2bfb-4f50-9d37-dc04bd5bac75\" }", "oc adm upgrade channel eus-4.16", "oc get clusterversion -o=jsonpath='{.items[*].spec}' | jq", "{ \"channel\": \"eus-4.16\", \"clusterID\": \"01eb9a57-2bfb-4f50-9d37-dc04bd5bac75\" }", "oc adm upgrade channel fast-4.16", "oc adm upgrade", "Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.28 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. 
Upstream is unset, so the cluster will use an appropriate default. Channel: fast-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.15, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.16.12 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.16.11 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.16.10 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.15.36 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.15.35 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5", "oc adm upgrade channel stable-4.15", "oc adm upgrade", "Cluster version is 4.14.34 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.27 and therefore OpenShift 4.15 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/6958394 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: stable-4.15 (available channels: candidate-4.14, candidate-4.15, eus-4.14, eus-4.15, fast-4.14, fast-4.15, stable-4.14, stable-4.15) Recommended updates: VERSION IMAGE 4.15.33 quay.io/openshift-release-dev/ocp-release@sha256:7142dd4b560 4.15.32 quay.io/openshift-release-dev/ocp-release@sha256:cda8ea5b13dc9 4.15.31 quay.io/openshift-release-dev/ocp-release@sha256:07cf61e67d3eeee 4.15.30 quay.io/openshift-release-dev/ocp-release@sha256:6618dd3c0f5 4.15.29 quay.io/openshift-release-dev/ocp-release@sha256:7a72abc3 4.15.28 quay.io/openshift-release-dev/ocp-release@sha256:1c8359fc2 4.15.27 quay.io/openshift-release-dev/ocp-release@sha256:bc9006febfe 4.15.26 quay.io/openshift-release-dev/ocp-release@sha256:dece7b61b1 4.14.38 quay.io/openshift-release-dev/ocp-release@sha256:c93914c62d7 4.14.37 quay.io/openshift-release-dev/ocp-release@sha256:c31a56d19 4.14.36 quay.io/openshift-release-dev/ocp-release@sha256:f21253 4.14.35 quay.io/openshift-release-dev/ocp-release@sha256:2dd69c5", "oc get csv -A", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE gitlab-operator-kubernetes.v0.17.2 GitLab 0.17.2 gitlab-operator-kubernetes.v0.17.1 Succeeded openshift-operator-lifecycle-manager packageserver Package Server 0.19.0 Succeeded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bere83 True False False 3 3 3 0 25d worker rendered-worker-245c4f True False False 2 2 2 0 25d", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready worker 39d v1.27.15+6147456 worker-1 Ready worker 39d v1.27.15+6147456", "oc label node worker-0 node-role.kubernetes.io/mcp-1=", "oc label node worker-1 node-role.kubernetes.io/mcp-2=", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 39d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 39d v1.27.15+6147456 worker-0 Ready mcp-1,worker 39d v1.27.15+6147456 worker-1 Ready mcp-2,worker 39d v1.27.15+6147456", "--- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-2 spec: machineConfigSelector: 
matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-2] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-2: \"\" --- apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-1 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-1] } nodeSelector: matchLabels: node-role.kubernetes.io/mcp-1: \"\"", "oc apply -f mcps.yaml", "machineconfigpool.machineconfiguration.openshift.io/mcp-2 created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f False True True 1 0 0 0 10s mcp-2 rendered-mcp-2-2r4s1f False True True 1 0 0 0 10s worker rendered-worker-23fc4f False True True 0 0 0 2 25d", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-be3e83 True False False 3 3 3 0 25d mcp-1 rendered-mcp-1-2f4c4f True False False 1 1 1 0 7m33s mcp-2 rendered-mcp-2-2r4s1f True False False 1 1 1 0 51s worker rendered-worker-23fc4f True False False 0 0 0 0 25d", "oc get pods -A | grep -E -vi 'complete|running'", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 32d v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 32d v1.27.15+6147456 worker-0 Ready mcp-1,worker 32d v1.27.15+6147456 worker-1 Ready mcp-2,worker 32d v1.27.15+6147456", "oc get bmh -n openshift-machine-api", "NAME STATE CONSUMER ONLINE ERROR AGE ctrl-plane-0 unmanaged cnf-58879-master-0 true 33d ctrl-plane-1 unmanaged cnf-58879-master-1 true 33d ctrl-plane-2 unmanaged cnf-58879-master-2 true 33d worker-0 unmanaged cnf-58879-worker-0-45879 true 33d worker-1 progressing cnf-58879-worker-0-dszsh false 1d 1", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 17h baremetal 4.14.34 True False False 32d service-ca 4.14.34 True False False 32d storage 4.14.34 True False False 32d", "oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":true}}'", "oc patch mcp/mcp-2 --type merge --patch '{\"spec\":{\"paused\":true}}'", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 true mcp-2 true", "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} 
{\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem", "oc apply -f etcd-backup-pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s", "apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1", "oc apply -f etcd-single-backup.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate", "oc apply -f etcd-backup-local-storage.yaml", "apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1", "oc apply -f etcd-backup-pvc.yaml", "apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1", "oc apply -f etcd-single-backup.yaml", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d22h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d22h config-operator 4.14.34 True False False 4d22h console 4.14.34 True False False 4d22h service-ca 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d22h", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d22h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d22h v1.27.15+6147456 worker-0 Ready 
mcp-1,worker 4d22h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d22h v1.27.15+6147456", "oc get po -A | grep -E -iv 'running|complete'", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>\":\"true\"}}' --type=merge", "oc get configmap admin-acks -n openshift-config -o json | jq .data", "{ \"ack-4.14-kube-1.28-api-removals-in-4.15\": \"true\", \"ack-4.15-kube-1.29-api-removals-in-4.16\": \"true\" }", "oc adm upgrade --to=4.15.33", "Requested update to 4.15.33 1", "watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23", "oc get installplan -A | grep -E 'APPROVED|false'", "NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false", "oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'", "installplan.operators.coreos.com/install-nwjnh patched", "oc get all -n metallb-system", "NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s NAME DESIRED CURRENT READY AGE 
replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s", "NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s", "oc get installplan -A | grep -E 'APPROVED|false'", "oc adm upgrade", "Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default. Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e", "oc adm upgrade --include-not-recommended", "Cluster version is 4.15.33 Upgradeable=False Reason: AdminAckRequired Message: Kubernetes 1.29 and therefore OpenShift 4.16 remove several APIs which require admin consideration. Please see the knowledge article https://access.redhat.com/articles/7031404 for details and instructions. Upstream is unset, so the cluster will use an appropriate default.Channel: eus-4.16 (available channels: candidate-4.15, candidate-4.16, eus-4.16, fast-4.15, fast-4.16, stable-4.15, stable-4.16) Recommended updates: VERSION IMAGE 4.16.14 quay.io/openshift-release-dev/ocp-release@sha256:0521a0f1acd2d1b77f76259cb9bae9c743c60c37d9903806a3372c1414253658 4.16.13 quay.io/openshift-release-dev/ocp-release@sha256:6078cb4ae197b5b0c526910363b8aff540343bfac62ecb1ead9e068d541da27b 4.15.34 quay.io/openshift-release-dev/ocp-release@sha256:f2e0c593f6ed81250c11d0bac94dbaf63656223477b7e8693a652f933056af6e Supported but not recommended updates: Version: 4.16.15 Image: quay.io/openshift-release-dev/ocp-release@sha256:671bc35e Recommended: Unknown Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. 
https://issues.redhat.com/browse/IR-461", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-4.15-kube-1.29-api-removals-in-4.16\":\"true\"}}' --type=merge", "configmap/admin-acks patched", "oc adm upgrade --to=4.16.14", "Requested update to 4.16.14", "oc adm upgrade --to=4.16.15", "error: the update 4.16.15 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=Unknown risk and to proceed with update use --allow-not-recommended. Reason: EvaluationFailed Message: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461", "oc adm upgrade --to=4.16.15 --allow-not-recommended", "warning: with --allow-not-recommended you have accepted the risks with 4.14.11 and bypassed Recommended=Unknown EvaluationFailed: Exposure to AzureRegistryImagePreservation is unknown due to an evaluation failure: invalid PromQL result length must be one, but is 0 In Azure clusters, the in-cluster image registry may fail to preserve images on update. https://issues.redhat.com/browse/IR-461 Requested update to 4.16.15", "watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.15; oc get co | grep 4.16; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True True 10m Working towards 4.16.14: 132 of 903 done (14% complete), waiting on kube-controller-manager, kube-scheduler NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d3h baremetal 4.15.33 True False False 5d4h cloud-controller-manager 4.15.33 True False False 5d4h cloud-credential 4.15.33 True False False 5d4h cluster-autoscaler 4.15.33 True False False 5d4h console 4.15.33 True False False 5d3h config-operator 4.16.14 True False False 5d4h etcd 4.16.14 True False False 5d4h kube-apiserver 4.16.14 True True False 5d4h NodeInstallerProgressing: 1 node is at revision 15; 2 nodes are at revision 17 NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d4h v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d4h v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d4h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d4h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-kube-apiserver kube-apiserver-ctrl-plane-0 0/5 Pending 0 <invalid>", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 123m Cluster version is 4.16.14 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d6h baremetal 4.16.14 True False False 5d7h cloud-controller-manager 4.16.14 True False False 5d7h cloud-credential 4.16.14 True False False 5d7h cluster-autoscaler 4.16.14 True False False 5d7h config-operator 4.16.14 True False False 5d7h console 4.16.14 True False False 5d6h # operator-lifecycle-manager-packageserver 4.16.14 True False False 5d7h service-ca 4.16.14 True False False 5d7h storage 4.16.14 True False False 5d7h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d7h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d7h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d7h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d7h 
v1.27.15+6147456", "watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"", "oc get installplan -A | grep -E 'APPROVED|false'", "oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'", "oc get all -n metallb-system", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 true mcp-2 true", "oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'", "machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 true", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 false", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 
True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h", "oc get po -A | grep -E -iv 'complete|running'", "oc -n openshift-config patch cm admin-acks --patch '{\"data\":{\"ack-<update_version_from>-kube-<kube_api_version>-api-removals-in-<update_version_to>\":\"true\"}}' --type=merge", "oc get configmap admin-acks -n openshift-config -o json | jq .data", "{ \"ack-4.14-kube-1.28-api-removals-in-4.15\": \"true\", \"ack-4.15-kube-1.29-api-removals-in-4.16\": \"true\" }", "oc adm upgrade --to=4.15.33", "Requested update to 4.15.33 1", "watch \"oc get clusterversion; echo; oc get co | head -1; oc get co | grep 4.14; oc get co | grep 4.15; echo; oc get no; echo; oc get po -A | grep -E -iv 'running|complete'\"", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.14.34 True True 4m6s Working towards 4.15.33: 111 of 873 done (12% complete), waiting on kube-apiserver NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.14.34 True False False 4d22h baremetal 4.14.34 True False False 4d23h cloud-controller-manager 4.14.34 True False False 4d23h cloud-credential 4.14.34 True False False 4d23h cluster-autoscaler 4.14.34 True False False 4d23h console 4.14.34 True False False 4d22h storage 4.14.34 True False False 4d23h config-operator 4.15.33 True False False 4d23h etcd 4.15.33 True False False 4d23h NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-1 Ready control-plane,master 4d23h v1.27.15+6147456 ctrl-plane-2 Ready control-plane,master 4d23h v1.27.15+6147456 worker-0 Ready mcp-1,worker 4d23h v1.27.15+6147456 worker-1 Ready mcp-2,worker 4d23h v1.27.15+6147456 NAMESPACE NAME READY STATUS RESTARTS AGE openshift-marketplace redhat-marketplace-rf86t 0/1 ContainerCreating 0 0s", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.15.33 True False 28m Cluster version is 4.15.33 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.15.33 True False False 5d baremetal 4.15.33 True False False 5d cloud-controller-manager 4.15.33 True False False 5d1h cloud-credential 4.15.33 True False False 5d1h cluster-autoscaler 4.15.33 True False False 5d config-operator 4.15.33 True False False 5d console 4.15.33 True False False 5d service-ca 4.15.33 True False False 5d storage 4.15.33 True False False 5d NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-1 Ready control-plane,master 5d v1.28.13+2ca1a23 ctrl-plane-2 Ready control-plane,master 5d v1.28.13+2ca1a23 worker-0 Ready mcp-1,worker 5d v1.28.13+2ca1a23 worker-1 Ready mcp-2,worker 5d v1.28.13+2ca1a23", "oc get installplan -A | grep -E 'APPROVED|false'", "NAMESPACE NAME CSV APPROVAL APPROVED metallb-system install-nwjnh 
metallb-operator.v4.16.0-202409202304 Manual false openshift-nmstate install-5r7wr kubernetes-nmstate-operator.4.16.0-202409251605 Manual false", "oc patch installplan -n metallb-system install-nwjnh --type merge --patch '{\"spec\":{\"approved\":true}}'", "installplan.operators.coreos.com/install-nwjnh patched", "oc get all -n metallb-system", "NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 0/1 ContainerCreating 0 4s pod/metallb-operator-controller-manager-77895bdb46-bqjdx 1/1 Running 0 4m1s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 0/1 ContainerCreating 0 4s pod/metallb-operator-webhook-server-d76f9c6c8-57r4w 1/1 Running 0 4m1s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 0 4s replicaset.apps/metallb-operator-controller-manager-77895bdb46 1 1 1 4m1s replicaset.apps/metallb-operator-controller-manager-99b76f88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 0 4s replicaset.apps/metallb-operator-webhook-server-6f7dbfdb88 0 0 0 4m40s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 1 1 1 4m1s", "NAME READY STATUS RESTARTS AGE pod/metallb-operator-controller-manager-69b5f884c-8bp22 1/1 Running 0 25s pod/metallb-operator-webhook-server-5d9b968896-vnbhk 1/1 Running 0 25s NAME DESIRED CURRENT READY AGE replicaset.apps/metallb-operator-controller-manager-69b5f884c 1 1 1 25s replicaset.apps/metallb-operator-controller-manager-77895bdb46 0 0 0 4m22s replicaset.apps/metallb-operator-webhook-server-5d9b968896 1 1 1 25s replicaset.apps/metallb-operator-webhook-server-d76f9c6c8 0 0 0 4m22s", "oc get installplan -A | grep -E 'APPROVED|false'", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 true mcp-2 true", "oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'", "machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 true", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14", "oc get 
nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 false", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h", "oc get po -A | grep -E -iv 'complete|running'", "oc adm upgrade --to=4.15.33", "Requested update to 4.15.33 1", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-c9a52144456dbff9c9af9c5a37d1b614 True False False 3 3 3 0 36d mcp-1 rendered-mcp-1-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h mcp-2 rendered-mcp-2-07fe50b9ad51fae43ed212e84e1dcc8e False False False 1 0 0 0 47h worker rendered-worker-f1ab7b9a768e1b0ac9290a18817f60f0 True False False 0 0 0 0 36d", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.27.15+6147456 worker-1 Ready mcp-2,worker 5d8h v1.27.15+6147456", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 true mcp-2 true", "oc patch mcp/mcp-1 --type merge --patch '{\"spec\":{\"paused\":false}}'", "machineconfigpool.machineconfiguration.openshift.io/mcp-1 patched", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' 
| grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 true", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d8h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d8h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d8h v1.29.8+f10c92d worker-1 NotReady,SchedulingDisabled mcp-2,worker 5d8h v1.27.15+6147456", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.16.14 True False 4h38m Cluster version is 4.16.14", "oc get nodes", "NAME STATUS ROLES AGE VERSION ctrl-plane-0 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-1 Ready control-plane,master 5d9h v1.29.8+f10c92d ctrl-plane-2 Ready control-plane,master 5d9h v1.29.8+f10c92d worker-0 Ready mcp-1,worker 5d9h v1.29.8+f10c92d worker-1 Ready mcp-2,worker 5d9h v1.29.8+f10c92d", "oc get mcp -o json | jq -r '[\"MCP\",\"Paused\"], [\"---\",\"------\"], (.items[] | [(.metadata.name), (.spec.paused)]) | @tsv' | grep -v worker", "MCP Paused --- ------ master false mcp-1 false mcp-2 false", "oc get co", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.16.14 True False False 5d9h baremetal 4.16.14 True False False 5d9h cloud-controller-manager 4.16.14 True False False 5d10h cloud-credential 4.16.14 True False False 5d10h cluster-autoscaler 4.16.14 True False False 5d9h config-operator 4.16.14 True False False 5d9h console 4.16.14 True False False 5d9h control-plane-machine-set 4.16.14 True False False 5d9h csi-snapshot-controller 4.16.14 True False False 5d9h dns 4.16.14 True False False 5d9h etcd 4.16.14 True False False 5d9h image-registry 4.16.14 True False False 85m ingress 4.16.14 True False False 5d9h insights 4.16.14 True False False 5d9h kube-apiserver 4.16.14 True False False 5d9h kube-controller-manager 4.16.14 True False False 5d9h kube-scheduler 4.16.14 True False False 5d9h kube-storage-version-migrator 4.16.14 True False False 4h48m machine-api 4.16.14 True False False 5d9h machine-approver 4.16.14 True False False 5d9h machine-config 4.16.14 True False False 5d9h marketplace 4.16.14 True False False 5d9h monitoring 4.16.14 True False False 5d9h network 4.16.14 True False False 5d9h node-tuning 4.16.14 True False False 5d7h openshift-apiserver 4.16.14 True False False 5d9h openshift-controller-manager 4.16.14 True False False 5d9h openshift-samples 4.16.14 True False False 5h24m operator-lifecycle-manager 4.16.14 True False False 5d9h operator-lifecycle-manager-catalog 4.16.14 True False False 5d9h operator-lifecycle-manager-packageserver 4.16.14 True False False 5d9h service-ca 4.16.14 True False False 5d9h storage 4.16.14 True False False 5d9h", "oc get po -A | grep -E -iv 'complete|running'", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc project <project_name>", "oc get clusterversion,clusteroperator,node", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.16.11 True False 62d Cluster version is 4.16.11 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.16.11 True False False 62d clusteroperator.config.openshift.io/baremetal 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/cloud-credential 4.16.11 True False False 62d clusteroperator.config.openshift.io/cluster-autoscaler 4.16.11 True False False 62d 
clusteroperator.config.openshift.io/config-operator 4.16.11 True False False 62d clusteroperator.config.openshift.io/console 4.16.11 True False False 62d clusteroperator.config.openshift.io/control-plane-machine-set 4.16.11 True False False 62d clusteroperator.config.openshift.io/csi-snapshot-controller 4.16.11 True False False 62d clusteroperator.config.openshift.io/dns 4.16.11 True False False 62d clusteroperator.config.openshift.io/etcd 4.16.11 True False False 62d clusteroperator.config.openshift.io/image-registry 4.16.11 True False False 55d clusteroperator.config.openshift.io/ingress 4.16.11 True False False 62d clusteroperator.config.openshift.io/insights 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-scheduler 4.16.11 True False False 62d clusteroperator.config.openshift.io/kube-storage-version-migrator 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-api 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-approver 4.16.11 True False False 62d clusteroperator.config.openshift.io/machine-config 4.16.11 True False False 62d clusteroperator.config.openshift.io/marketplace 4.16.11 True False False 62d clusteroperator.config.openshift.io/monitoring 4.16.11 True False False 62d clusteroperator.config.openshift.io/network 4.16.11 True False False 62d clusteroperator.config.openshift.io/node-tuning 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-apiserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-controller-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/openshift-samples 4.16.11 True False False 35d clusteroperator.config.openshift.io/operator-lifecycle-manager 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-catalog 4.16.11 True False False 62d clusteroperator.config.openshift.io/operator-lifecycle-manager-packageserver 4.16.11 True False False 62d clusteroperator.config.openshift.io/service-ca 4.16.11 True False False 62d clusteroperator.config.openshift.io/storage 4.16.11 True False False 62d NAME STATUS ROLES AGE VERSION node/ctrl-plane-0 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-1 Ready control-plane,master,worker 62d v1.29.7 node/ctrl-plane-2 Ready control-plane,master,worker 62d v1.29.7", "oc get pod", "NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d", "oc logs -n <namespace> busybox-1", "oc describe pod -n <namespace> busybox-1", "Name: busybox-1 Namespace: busy Priority: 0 Service Account: default Node: worker-3/192.168.0.0 Start Time: Mon, 27 Nov 2023 14:41:25 -0500 Labels: app=busybox pod-template-hash=<hash> Annotations: k8s.ovn.org/pod-networks: ... 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 41m (x170 over 7d1h) kubelet Container image \"quay.io/quay/busybox:latest\" already present on machine Normal Created 41m (x170 over 7d1h) kubelet Created container busybox Normal Started 41m (x170 over 7d1h) kubelet Started container busybox", "oc get events -n <namespace> --sort-by=\".metadata.creationTimestamp\" 1", "oc get events -A --sort-by=\".metadata.creationTimestamp\" 1", "oc get events -A | grep -Ei \"warning|error\"", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE openshift 59s Warning FailedMount pod/openshift-1 MountVolume.SetUp failed for volume \"v4-0-config-user-idp-0-file-data\" : references non-existent secret key: test", "oc delete events -n <namespace> --all", "oc rsh -n <namespace> busybox-1", "oc get pod", "NAME READY STATUS RESTARTS AGE busybox-1 1/1 Running 168 (34m ago) 7d busybox-2 1/1 Running 119 (9m20s ago) 4d23h busybox-3 1/1 Running 168 (43m ago) 7d busybox-4 1/1 Running 168 (43m ago) 7d", "oc debug -n <namespace> busybox-1", "Starting pod/busybox-1-debug, command was: sleep 3600 Pod IP: 10.133.2.11", "oc exec -it <pod> -- <command>", "oc get co", "oc get po -A | grep -Eiv 'complete|running'", "oc get events -n openshift-authentication --sort-by='.metadata.creationTimestamp'", "oc get pod -n openshift-authentication", "oc logs -n openshift-authentication <pod_name>", "openssl x509 -enddate -noout -in <cert_file_name>.pem", "for each in USD(oc get secret -n openshift-etcd | grep \"kubernetes.io/tls\" | grep -e \"etcd-peer\\|etcd-serving\" | awk '{print USD1}'); do oc get secret USDeach -n openshift-etcd -o jsonpath=\"{.data.tls\\.crt}\" | base64 -d | openssl x509 -noout -enddate; done", "oc patch mcp/<mcp_name> --type merge --patch '{\"spec\":{\"paused\":true}}'", "oc patch mcp/<mcp_name> --type merge --patch '{\"spec\":{\"paused\":false}}'", "oc adm node-logs <node_name> -u crio", "oc debug node/<node_name>", "chroot /host", "You are now logged in as root on the node", "ssh core@<node_name>", "oc adm cordon <node_name>", "oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -qsk http://localhost:9090/api/v1/metadata | jq '.data", "oc get routes -n openshift-console console -o jsonpath='{.status.ingress[0].host}'", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "oc apply -f monitoringConfigMap.yaml", "apiVersion: v1 kind: Namespace metadata: name: open-cluster-management-observability", "oc apply -f monitoringNamespace.yaml", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: multi-cloud-observability namespace: open-cluster-management-observability spec: storageClassName: openshift-storage.noobaa.io generateBucketName: acm-multi", "oc apply -f monitoringObjectBucketClaim.yaml", "apiVersion: v1 kind: Secret metadata: name: multiclusterhub-operator-pull-secret namespace: open-cluster-management-observability stringData: .dockerconfigjson: 'PULL_SECRET'", "oc apply -f monitoringSecret.yaml", "NOOBAA_ACCESS_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')", "NOOBAA_SECRET_KEY=USD(oc get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')", "OBJECT_BUCKET=USD(oc get objectbucketclaim -n open-cluster-management-observability multi-cloud-observability -o json | jq 
-r .spec.bucketName)", "apiVersion: v1 kind: Secret metadata: name: thanos-object-storage namespace: open-cluster-management-observability type: Opaque stringData: thanos.yaml: | type: s3 config: bucket: USD{OBJECT_BUCKET} endpoint: s3.openshift-storage.svc insecure: true access_key: USD{NOOBAA_ACCESS_KEY} secret_key: USD{NOOBAA_SECRET_KEY}", "oc apply -f monitoringBucketSecret.yaml", "apiVersion: observability.open-cluster-management.io/v1beta2 kind: MultiClusterObservability metadata: name: observability spec: advanced: retentionConfig: blockDuration: 2h deleteDelay: 48h retentionInLocal: 24h retentionResolutionRaw: 3d enableDownsampling: false observabilityAddonSpec: enableMetrics: true interval: 300 storageConfig: alertmanagerStorageSize: 10Gi compactStorageSize: 100Gi metricObjectStorage: key: thanos.yaml name: thanos-object-storage receiveStorageSize: 25Gi ruleStorageSize: 10Gi storeStorageSize: 25Gi", "oc apply -f monitoringMultiClusterObservability.yaml", "oc get routes,pods -n open-cluster-management-observability", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD route.route.openshift.io/alertmanager alertmanager-open-cluster-management-observability.cloud.example.com /api/v2 alertmanager oauth-proxy reencrypt/Redirect None route.route.openshift.io/grafana grafana-open-cluster-management-observability.cloud.example.com grafana oauth-proxy reencrypt/Redirect None 1 route.route.openshift.io/observatorium-api observatorium-api-open-cluster-management-observability.cloud.example.com observability-observatorium-api public passthrough/None None route.route.openshift.io/rbac-query-proxy rbac-query-proxy-open-cluster-management-observability.cloud.example.com rbac-query-proxy https reencrypt/Redirect None NAME READY STATUS RESTARTS AGE pod/observability-alertmanager-0 3/3 Running 0 1d pod/observability-alertmanager-1 3/3 Running 0 1d pod/observability-alertmanager-2 3/3 Running 0 1d pod/observability-grafana-685b47bb47-dq4cw 3/3 Running 0 1d <...snip...> pod/observability-thanos-store-shard-0-0 1/1 Running 0 1d pod/observability-thanos-store-shard-1-0 1/1 Running 0 1d pod/observability-thanos-store-shard-2-0 1/1 Running 0 1d", "oc get cm -n openshift-monitoring prometheus-k8s-rulefiles-0 -o yaml", "- alert: etcdHighFsyncDurations annotations: description: 'etcd cluster \"{{ USDlabels.job }}\": 99th percentile fsync durations are {{ USDvalue }}s on etcd instance {{ USDlabels.instance }}.' runbook_url: https://github.com/openshift/runbooks/blob/master/alerts/cluster-etcd-operator/etcdHighFsyncDurations.md summary: etcd cluster 99th percentile fsync durations are too high. 
expr: | histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~\".*etcd.*\"}[5m])) > 1 for: 10m labels: severity: critical", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc apply -f monitoringConfigMap.yaml", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: ui name: myapp namespace: myns spec: endpoints: 1 - interval: 30s port: ui-http scheme: http path: /healthz 2 selector: matchLabels: app: ui", "oc apply -f monitoringServiceMonitor.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc apply -f monitoringConfigMap.yaml", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: myapp-alert namespace: myns spec: groups: - name: example rules: - alert: InternalErrorsAlert expr: flask_http_request_total{status=\"500\"} > 0", "oc apply -f monitoringAlertRule.yaml", "oc adm policy add-cluster-role-to-user cluster-admin <emergency_user>", "oc whoami", "oc delete secrets kubeadmin -n kube-system", "oc debug node/<worker_node_name>", "chroot /host", "ssh core@<worker_node_name>", "sudo -i", "oc describe scc restricted-v2", "Name: restricted-v2 Priority: <none> Access: Users: <none> Groups: <none> Settings: Allow Privileged: false Allow Privilege Escalation: false Default Add Capabilities: <none> Required Drop Capabilities: ALL Allowed Capabilities: NET_BIND_SERVICE Allowed Seccomp Profiles: runtime/default Allowed Volume Types: configMap,downwardAPI,emptyDir,ephemeral,persistentVolumeClaim,projected,secret Allowed Flexvolumes: <all> Allowed Unsafe Sysctls: <none> Forbidden Sysctls: <none> Allow Host Network: false Allow Host Ports: false Allow Host PID: false Allow Host IPC: false Read Only Root Filesystem: false Run As User Strategy: MustRunAsRange UID: <none> UID Range Min: <none> UID Range Max: <none> SELinux Context Strategy: MustRunAs User: <none> Role: <none> Type: <none> Level: <none> FSGroup Strategy: MustRunAs Ranges: <none> Supplemental Groups Strategy: RunAsAny Ranges: <none>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/edge_computing/index
Chapter 8. Deployments
Chapter 8. Deployments 8.1. Understanding deployments The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template. Deployment objects involve one or more replica sets , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers , which preceded replica sets. One or more pods, which represent an instance of a particular version of an application. Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 8.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 8.1.1.1. Replica sets A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 8.1.1.2. Replication controllers Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. 
Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. Note Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 8.1.2. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 8.1.3. DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. 
However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 8.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use.
Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. 8.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployment objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you cannot delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 8.1.4.2. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes. 8.1.4.3. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies. 8.2.
Managing deployment processes 8.2.1. Managing DeploymentConfig objects Important As of OpenShift Container Platform 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed. Instead, use Deployment objects or another alternative to provide declarative updates for pods. DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 8.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 8.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 8.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 8.2.1.4. Rolling back a deployment Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console. Procedure To roll back to the last successfully deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 8.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: template: # ... spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar # ... 8.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application. You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 8.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. 
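For example, a minimal sketch of exercising this trigger, assuming a hypothetical v2 tag already exists in the same image stream and project, is to retag it as latest so that the trigger fires:
# origin-ruby-sample:v2 is a hypothetical tag used only for illustration
USD oc tag origin-ruby-sample:v2 origin-ruby-sample:latest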
Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 8.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 8.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 8.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 8.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. 
This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. Navigate to Workloads Secrets . Create a secret that contains credentials for accessing a private image repository. Navigate to Workloads DeploymentConfigs . Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 8.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod metadata: name: my-pod # ... spec: nodeSelector: disktype: ssd # ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 8.2.1.12. Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc # ... spec: # ... securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 8.3. Using deployment strategies Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change. Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes. Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. 8.3.1. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. 
The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 8.3.2. Rolling strategy A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the previous complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update). maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable .
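As a minimal sketch of the full-capacity tuning described above, assuming a DeploymentConfig named frontend (substitute your own object name and values), you could patch the rolling parameters directly:
# frontend is a placeholder name; the maxSurge and maxUnavailable values are illustrative
USD oc patch dc/frontend -p '{\"spec\":{\"strategy\":{\"rollingParams\":{\"maxSurge\":\"20%\",\"maxUnavailable\":\"0%\"}}}}'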
Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 8.3.2.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 8.3.2.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest Note This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the oc expose dc/deployment-example --port=<port> command after completing this procedure. If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version. 8.3.2.3. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.
Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.2.4. Starting a rolling deployment using the Developer perspective You can upgrade an application by starting a rolling deployment. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure In the Topology view, click the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 8.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.3. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift # ... spec: # ... strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 8.3.3.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.3.2. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. 
Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 8.2. Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc # ... spec: # ... 
strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 8.3.4.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 
2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 8.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 8.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. Alternatively, you can use an A/B versions strategy in which both versions are active at the same time. With this strategy, some users can use version A , and other users can use version B . You can use this strategy to experiment with user interface changes or other features in order to get user feedback. You can also use it to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 8.4.1. Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard.
Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 8.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 8.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 8.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route. Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 8.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live.
If necessary, you can roll back to the older (blue) version by switching the service back to the previous version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 8.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 8.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services. The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 . Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application.
The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service under alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Note When using alternateBackends , also use the roundrobin load balancing strategy to ensure requests are distributed as expected to the services based on weight. roundrobin can be set for a route by using a route annotation. See the Additional resources section for more information about route annotations. Setting a service's weight to 0 with the oc set route-backends command means the service does not participate in load balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin # ... spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 # ... 8.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu next to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 8.4.5.1.2. Managing weights of a new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that indicates the relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 8.4.5.1.3. Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...]
[options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To override the default values for the load balancing algorithm, adjust the annotation on the route by setting the algorithm to roundrobin . For a route on OpenShift Container Platform, the default load balancing algorithm is set to random or source values. To set the algorithm to roundrobin , run the command: USD oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin For Transport Layer Security (TLS) passthrough routes, the default value is source . For all other routes, the default is random . To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed one. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 8.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route.
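Both shards are reachable through the single ab-example service because its selector matches the shared ab-example=true label rather than a shard-specific label. The following is a minimal sketch of the resulting Service object; the port shown assumes that the example image listens on 8080, and the exact manifest that oc expose generates may differ:

apiVersion: v1
kind: Service
metadata:
  name: ab-example
spec:
  selector:
    ab-example: "true"  # matches pods from both ab-example-a and ab-example-b
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080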
However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red). Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b 8.4.6. Additional resources Route-specific annotations .
[ "apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always", "apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80", "apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3", "oc rollout pause deployments/<name>", "oc rollout latest dc/<name>", "oc rollout history dc/<name>", "oc rollout history dc/<name> --revision=1", "oc describe dc <name>", "oc rollout retry dc/<name>", "oc rollout undo dc/<name>", "oc set triggers dc/<name> --auto", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar", "oc logs -f dc/<name>", "oc logs --version=1 dc/<name>", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"", "oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"", "oc scale dc frontend --replicas=3", "apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd", "oc edit dc/<deployment_config>", "apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 
4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}", "oc new-app quay.io/openshifttest/deployment-example:latest", "oc expose svc/deployment-example", "oc scale dc/deployment-example --replicas=3", "oc tag deployment-example:v2 deployment-example:latest", "oc describe dc deployment-example", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete", "Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete", "pre: failurePolicy: Abort execNewPod: {} 1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4", "oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2", "oc new-app openshift/deployment-example:v1 --name=example-blue", "oc new-app openshift/deployment-example:v2 --name=example-green", "oc expose svc/example-blue --name=bluegreen-example", "oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'", "oc new-app openshift/deployment-example --name=ab-example-a", "oc new-app openshift/deployment-example:v2 --name=ab-example-b", "oc expose svc/ab-example-a", "oc edit route <route_name>", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15", "oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] 
[options]", "oc set route-backends ab-example ab-example-a=198 ab-example-b=2", "oc set route-backends ab-example", "NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)", "oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin", "oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10", "oc set route-backends ab-example --adjust ab-example-b=5%", "oc set route-backends ab-example --adjust ab-example-b=+15%", "oc set route-backends ab-example --equal", "oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA", "oc delete svc/ab-example-a", "oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true", "oc expose service ab-example", "oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true", "oc delete svc/ab-example-b", "oc scale dc/ab-example-a --replicas=0", "oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0", "oc edit dc/ab-example-a", "oc edit dc/ab-example-b" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/deployments
Chapter 8. Installing a cluster on GCP into an existing VPC
Chapter 8. Installing a cluster on GCP into an existing VPC In OpenShift Container Platform version 4.13, you can install a cluster into an existing Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.2. About using a custom VPC In OpenShift Container Platform 4.13, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking for the subnets. 8.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The subnets must be within the machine network. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 8.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide one subnet for control-plane machines and one subnet for compute machines. The subnet CIDRs belong to the machine CIDR that you specified. 8.2.3. Division of permissions Some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 8.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management.
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.5.
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. 
If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 8.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . 
String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . 
Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 8.6.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.4. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. platform.gcp.networkProjectID Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. 
platform.gcp.projectID The name of the GCP project where the installation program installs the cluster. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.controlPlaneSubnet The name of the existing subnet where you want to deploy your control plane machines. The subnet name. platform.gcp.computeSubnet The name of the existing subnet where you want to deploy your compute machines. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. platform.gcp.defaultMachinePlatform.zones The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.defaultMachinePlatform.osDisk.diskType The GCP disk type . Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. Compute nodes can be either type. platform.gcp.defaultMachinePlatform.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines. String. The name of GCP project where the image is located. platform.gcp.defaultMachinePlatform.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. platform.gcp.defaultMachinePlatform.tags Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . platform.gcp.defaultMachinePlatform.type The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.location The GCP location in which the KMS key ring exists. The GCP location. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.projectID The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. 
For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . platform.gcp.defaultMachinePlatform.secureBoot Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . platform.gcp.defaultMachinePlatform.confidentialCompute Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . platform.gcp.defaultMachinePlatform.onHostMaintenance Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. controlPlane.platform.gcp.osDisk.diskType The GCP disk type for control plane machines. Control plane machines must use the pd-ssd disk type, which is the default. controlPlane.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only. String. The name of GCP project where the image is located. controlPlane.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. 
controlPlane.platform.gcp.tags Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . controlPlane.platform.gcp.type The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . controlPlane.platform.gcp.zones The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . controlPlane.platform.gcp.secureBoot Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . controlPlane.platform.gcp.confidentialCompute Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . controlPlane.platform.gcp.onHostMaintenance Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . compute.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. compute.platform.gcp.osDisk.diskType The GCP disk type for compute machines. Either the default pd-ssd or the pd-standard disk type. compute.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only. 
String. The name of GCP project where the image is located. compute.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. compute.platform.gcp.tags Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . compute.platform.gcp.type The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . compute.platform.gcp.zones The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . compute.platform.gcp.secureBoot Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . compute.platform.gcp.confidentialCompute Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . compute.platform.gcp.onHostMaintenance Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . 8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 8.6.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 8.6.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 8.6.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 8.6.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Important Confidential Computing is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Due to a known issue in OpenShift Container Platform 4.13.3 and earlier versions, you cannot use persistent volume storage on a cluster with Confidential VMs on Google Cloud Platform (GCP).
This issue was resolved in OpenShift Container Platform 4.13.4. For more information, see OCPBUGS-11768 . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 8.6.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{"auths": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 25 1 14 16 17 23 Required. The installation program prompts you for this value. 2 8 If you do not provide these parameters and values, the installation program provides the default value. 3 9 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 10 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 11 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 6 12 18 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 7 13 19 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image for the installation program to use to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 15 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 20 Specify the name of an existing VPC. 21 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 22 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 25 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 8.6.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 8.6.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
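One informal way to confirm these prerequisites before starting the procedure is to query gcloud directly from the installation host. The following commands are not part of the installation program and are only a sketch; <project_id> is a placeholder for your own GCP project:
$ gcloud auth list                      # shows which account the gcloud CLI is currently using
$ gcloud config get-value project       # confirms the project that the installer will deploy into
$ gcloud projects get-iam-policy <project_id>   # lists the IAM bindings so you can check the roles granted to your service account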
Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 
Additional resources See About remote health monitoring for more information about the Telemetry service 8.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
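As an optional sanity check before customizing the cluster (this is not part of the documented procedure), you can confirm from the CLI session opened in "Logging in to the cluster by using the CLI" that the installation is healthy:
$ oc get nodes              # every node should report a Ready status
$ oc get clusteroperators   # every Operator should report Available True and Degraded False
$ oc get clusterversion     # shows the installed OpenShift Container Platform version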
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 6 - control-plane-tag1 - control-plane-tag2 osImage: 7 project: example-project-name name: example-image-name replicas: 3 compute: 8 9 - hyperthreading: Enabled 10 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 11 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 12 - compute-tag1 - compute-tag2 osImage: 13 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 14 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 15 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 16 region: us-central1 17 defaultMachinePlatform: tags: 18 - global-tag1 - global-tag2 osImage: 19 project: example-project-name name: example-image-name network: existing_vpc 20 controlPlaneSubnet: control_plane_subnet 21 computeSubnet: compute_subnet 22 pullSecret: '{\"auths\": ...}' 23 fips: false 24 sshKey: ssh-ed25519 AAAA... 
25", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/installing-gcp-vpc
Chapter 57. JSON Jackson
Chapter 57. JSON Jackson Jackson is a Data Format which uses the Jackson Library from("activemq:My.Queue"). marshal().json(JsonLibrary.Jackson). to("mqseries:Another.Queue"); 57.1. Dependencies When using json-jackson with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency> 57.2. Jackson Options The JSON Jackson dataformat supports 20 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. prettyPrint Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. namingStrategy String If set then Jackson will use the the defined Property Naming Strategy.Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. 57.3. Using custom ObjectMapper You can configure JacksonDataFormat to use a custom ObjectMapper in case you need more control of the mapping configuration. If you setup a single ObjectMapper in the registry, then Camel will automatic lookup and use this ObjectMapper . For example if you use Spring Boot, then Spring Boot can provide a default ObjectMapper for you if you have Spring MVC enabled. And this would allow Camel to detect that there is one bean of ObjectMapper class type in the Spring Boot bean registry and then use it. When this happens you should set a INFO logging from Camel. 57.4. Using Jackson for automatic type conversion The camel-jackson module allows integrating Jackson as a Type Converter . This works in a similar way to JAXB that integrates with Camel's type converter. To use this camel-jackson must be enabled, which is done by setting the following options on the CamelContext global options, as shown: @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, "true"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, "true"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; } The camel-jackson type converter integrates with JAXB which means you can annotate POJO class with JAXB annotations that Jackson can use. You can also use Jackson's own annotations on your POJO classes. 57.5. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.dataformat.json-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.json-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. true Boolean camel.dataformat.json-jackson.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.json-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.json-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.json-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.json-jackson.enabled Whether to enable auto configuration of the json-jackson data format. This is enabled by default. Boolean camel.dataformat.json-jackson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. String camel.dataformat.json-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.json-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.json-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.json-jackson.naming-strategy If set then Jackson will use the the defined Property Naming Strategy.Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. String camel.dataformat.json-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.json-jackson.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.json-jackson.schema-resolver Optional schema resolver used to lookup schemas for the data in transit. String camel.dataformat.json-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. String camel.dataformat.json-jackson.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.json-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.json-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean
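As a quick illustration of the auto-configuration options listed above, the same settings can be supplied in application.properties on Red Hat build of Camel Spring Boot. This is only a sketch: the values are examples, and myJacksonMapper is a hypothetical bean id that must match an ObjectMapper bean registered in your own application.
# pretty-print the marshalled JSON and skip fields with null values
camel.dataformat.json-jackson.pretty-print=true
camel.dataformat.json-jackson.include=NON_NULL
# unmarshal JSON arrays to a List
camel.dataformat.json-jackson.use-list=true
# point at your own ObjectMapper bean instead of the default lookup
camel.dataformat.json-jackson.object-mapper=myJacksonMapper
camel.dataformat.json-jackson.use-default-object-mapper=false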
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Jackson). to(\"mqseries:Another.Queue\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency>", "@Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, \"true\"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, \"true\"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-json-jackson-dataformat-starter
Chapter 5. OLMConfig [operators.coreos.com/v1]
Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. packageServerSyncInterval string PackageServerSyncInterval is used to define the sync interval for packagerserver pods. Packageserver pods periodically check the status of CatalogSources; this specifies the period using duration format (e.g. "60m"). For this parameter, only hours ("h"), minutes ("m"), and seconds ("s") may be specified. When not specified, the period defaults to the value specified within the packageserver. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. 
message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs HTTP method DELETE Description delete collection of OLMConfig Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.2. HTTP responses HTTP code Reponse body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. 
Body parameters Parameter Type Description body OLMConfig schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method DELETE Description delete an OLMConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.9. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body OLMConfig schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method GET Description read status of the specified OLMConfig Table 5.16. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty
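The spec fields described above are easier to read in a concrete manifest. The following is only an illustrative sketch: it assumes the cluster-scoped OLMConfig resource is named cluster, which is the usual convention, and the sync interval value is an arbitrary example.
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true
    packageServerSyncInterval: "60m"
A manifest like this could be applied with oc apply -f <file>, or the equivalent change could be made in place with oc edit olmconfig cluster.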
[ "type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`", "// other fields }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operatorhub_apis/olmconfig-operators-coreos-com-v1
function::set_kernel_pointer
function::set_kernel_pointer Name function::set_kernel_pointer - Writes a pointer value to kernel memory. Synopsis Arguments addr The kernel address to write the pointer to val The pointer which is to be written Description Writes the pointer value to a given kernel memory address. Reports an error when writing to the given address fails. Requires the use of guru mode (-g).
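A minimal invocation sketch follows. The script and addresses are illustrative assumptions only, not part of this reference entry; writing an arbitrary value to kernel memory can crash or corrupt the system, so treat this purely as an example of the call syntax. As noted above, guru mode (-g) is required.
# USD1 and USD2 below are SystemTap command-line arguments supplying the target address and the pointer value
stap -g -e 'probe begin { set_kernel_pointer($1, $2); exit() }' 0xffffffffc0400000 0xffffffffc0400100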
[ "set_kernel_pointer(addr:long,val:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-set-kernel-pointer
Chapter 4. Proactive authentication
Chapter 4. Proactive authentication Learn how to manage proactive authentication in Quarkus, including customizing settings and handling exceptions. Gain practical insights and strategies for various application scenarios. Proactive authentication is enabled in Quarkus by default. It ensures that all incoming requests with credentials are authenticated, even if the target page does not require authentication. As a result, requests with invalid credentials are rejected, even if the target page is public. You can turn off this default behavior if you want to authenticate only when the target page requires it. To turn off proactive authentication so that authentication occurs only when the target page requires it, modify the application.properties configuration file as follows: quarkus.http.auth.proactive=false If you turn off proactive authentication, the authentication process runs only when an identity is requested. An identity can be requested because of security rules that require the user to authenticate or because programmatic access to the current identity is required. If proactive authentication is used, accessing SecurityIdentity is a blocking operation. This is because authentication might have yet to happen, and accessing SecurityIdentity might require calls to external systems, such as databases, that might block the operation. For blocking applications, this is not an issue. However, if you have disabled authentication in a reactive application, this fails because you cannot do blocking operations on the I/O thread. To work around this, you need to @Inject an instance of io.quarkus.security.identity.CurrentIdentityAssociation and call the Uni<SecurityIdentity> getDeferredIdentity(); method. Then, you can subscribe to the resulting Uni to be notified when authentication is complete and the identity is available. Note You can still access SecurityIdentity synchronously with public SecurityIdentity getIdentity() in Quarkus REST (formerly RESTEasy Reactive) from endpoints that are annotated with @RolesAllowed , @Authenticated , or with respective configuration authorization checks because authentication has already happened. The same is also valid for Reactive routes if a route response is synchronous. When proactive authentication is disabled, standard security annotations used on CDI beans do not function on an I/O thread if a secured method that is not void synchronously returns a value. This limitation arises from the necessity for these methods to access SecurityIdentity . The following example defines HelloResource and HelloService . Any GET request to /hello runs on the I/O thread and throws a BlockingOperationNotAllowedException exception. There is more than one way to fix the example: Switch to a worker thread by annotating the hello endpoint with @Blocking . Change the sayHello method return type by using a reactive or asynchronous data type. Move the @RolesAllowed annotation to the endpoint. This could be one of the safest ways because accessing SecurityIdentity from endpoint methods is never the blocking operation. 
import jakarta.annotation.security.PermitAll; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.smallrye.mutiny.Uni; @Path("/hello") @PermitAll public class HelloResource { @Inject HelloService helloService; @GET public Uni<String> hello() { return Uni.createFrom().item(helloService.sayHello()); } } import jakarta.annotation.security.RolesAllowed; import jakarta.enterprise.context.ApplicationScoped; @ApplicationScoped public class HelloService { @RolesAllowed("admin") public String sayHello() { return "Hello"; } } 4.1. Customize authentication exception responses You can use Jakarta REST ExceptionMapper to capture Quarkus Security authentication exceptions such as io.quarkus.security.AuthenticationFailedException . For example: package io.quarkus.it.keycloak; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.core.UriInfo; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import io.quarkus.security.AuthenticationFailedException; @Provider @Priority(Priorities.AUTHENTICATION) public class AuthenticationFailedExceptionMapper implements ExceptionMapper<AuthenticationFailedException> { @Context UriInfo uriInfo; @Override public Response toResponse(AuthenticationFailedException exception) { return Response.status(401).header("WWW-Authenticate", "Basic realm=\"Quarkus\"").build(); } } Caution Some HTTP authentication mechanisms must handle authentication exceptions themselves to create a correct authentication challenge. For example, io.quarkus.oidc.runtime.CodeAuthenticationMechanism , which manages OpenID Connect (OIDC) authorization code flow authentication, must build a correct redirect URL and set a state cookie. Therefore, avoid using custom exception mappers to customize authentication exceptions thrown by such mechanisms. Instead, a safer approach is to ensure that proactive authentication is enabled and to use Vert.x HTTP route failure handlers. This is because events come to the handler with the correct response status and headers. Then, you must only customize the response; for example: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.security.AuthenticationFailedException; import io.vertx.core.Handler; import io.vertx.ext.web.Router; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class AuthenticationFailedExceptionHandler { public void init(@Observes Router router) { router.route().failureHandler(new Handler<RoutingContext>() { @Override public void handle(RoutingContext event) { if (event.failure() instanceof AuthenticationFailedException) { event.response().end("CUSTOMIZED_RESPONSE"); } else { event.next(); } } }); } } 4.2. References Quarkus Security overview Quarkus Security architecture Authentication mechanisms in Quarkus Identity providers
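The earlier note about injecting io.quarkus.security.identity.CurrentIdentityAssociation and calling getDeferredIdentity() can be illustrated with a short sketch. The following resource is not taken from this guide: the package, path, and class names are hypothetical, and it assumes a Quarkus REST application with an authentication mechanism configured. It simply shows one way to obtain the identity reactively when proactive authentication is disabled.
package org.acme.security;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

import io.quarkus.security.identity.CurrentIdentityAssociation;
import io.smallrye.mutiny.Uni;

@Path("/whoami")
public class WhoAmIResource {

    @Inject
    CurrentIdentityAssociation identityAssociation;

    @GET
    public Uni<String> whoAmI() {
        // getDeferredIdentity() resolves once authentication has actually run,
        // so no blocking call is made on the I/O thread
        return identityAssociation.getDeferredIdentity()
                .onItem().transform(identity -> identity.isAnonymous()
                        ? "anonymous"
                        : identity.getPrincipal().getName());
    }
}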
[ "quarkus.http.auth.proactive=false", "import jakarta.annotation.security.PermitAll; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.smallrye.mutiny.Uni; @Path(\"/hello\") @PermitAll public class HelloResource { @Inject HelloService helloService; @GET public Uni<String> hello() { return Uni.createFrom().item(helloService.sayHello()); } }", "import jakarta.annotation.security.RolesAllowed; import jakarta.enterprise.context.ApplicationScoped; @ApplicationScoped public class HelloService { @RolesAllowed(\"admin\") public String sayHello() { return \"Hello\"; } }", "package io.quarkus.it.keycloak; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.core.UriInfo; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; import io.quarkus.security.AuthenticationFailedException; @Provider @Priority(Priorities.AUTHENTICATION) public class AuthenticationFailedExceptionMapper implements ExceptionMapper<AuthenticationFailedException> { @Context UriInfo uriInfo; @Override public Response toResponse(AuthenticationFailedException exception) { return Response.status(401).header(\"WWW-Authenticate\", \"Basic realm=\\\"Quarkus\\\"\").build(); } }", "package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.security.AuthenticationFailedException; import io.vertx.core.Handler; import io.vertx.ext.web.Router; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class AuthenticationFailedExceptionHandler { public void init(@Observes Router router) { router.route().failureHandler(new Handler<RoutingContext>() { @Override public void handle(RoutingContext event) { if (event.failure() instanceof AuthenticationFailedException) { event.response().end(\"CUSTOMIZED_RESPONSE\"); } else { event.next(); } } }); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/security_architecture/security-proactive-authentication
15.7. Password Management of GVFS Mounts
15.7. Password Management of GVFS Mounts A typical GVFS mount asks for credentials on its activation unless the resource allows anonymous authentication or does not require any at all. The prompt is presented in a standard GTK+ dialog, in which the user can choose whether or not to save the password. Procedure 15.5. Example: Authenticated Mount Process Open Files and activate the address bar by pressing Ctrl + L . Enter a well-formed URI string of a service that needs authentication (for example, sftp://localhost/ ). The credentials dialog is displayed, asking for a user name, a password, and password store options. Fill in the credentials and confirm. If persistent storage is selected, the password is saved in the user keyring. GNOME Keyring is a central place for secrets storage. It is encrypted and automatically unlocked on desktop session start using the password provided on login by default. If it is protected by a different password, the password is set at the first use. To manage stored passwords and GNOME Keyring itself, the Seahorse application is provided. It allows individual records to be removed or passwords changed. For more information on Seahorse , consult the help manual for Seahorse embedded directly in the desktop.
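The same kind of mount can also be activated from a terminal, which can be useful for troubleshooting. The following is a brief sketch only; the sftp://localhost/ URI is the same illustrative address used in the procedure above, and the gvfs-mount utility, where available, prompts for the credentials on the command line:

# Mount the location; GVFS asks for a user name and password if the
# resource requires authentication.
gvfs-mount sftp://localhost/

# List the currently active GVFS mounts to confirm that the new mount
# is present.
gvfs-mount --list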
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/pswd-management
Chapter 10. Log storage
Chapter 10. Log storage 10.1. About log storage You can use an internal Loki or Elasticsearch log store on your cluster for storing logs, or you can use a ClusterLogForwarder custom resource (CR) to forward logs to an external store. 10.1.1. Log storage types Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Elasticsearch indexes incoming log records completely during ingestion. Loki indexes only a few fixed labels during ingestion and defers more complex parsing until after the logs have been stored. This means Loki can collect logs more quickly. 10.1.1.1. About the Elasticsearch log store The logging Elasticsearch instance is optimized and tested for short term storage, approximately seven days. If you want to retain your logs over a longer term, it is recommended you move the data to a third-party storage system. Elasticsearch organizes the log data from Fluentd into datastores, or indices , then subdivides each index into multiple pieces called shards , which it spreads across a set of Elasticsearch nodes in an Elasticsearch cluster. You can configure Elasticsearch to make copies of the shards, called replicas , which Elasticsearch also spreads across the Elasticsearch nodes. The ClusterLogging custom resource (CR) allows you to specify how the shards are replicated to provide data redundancy and resilience to failure. You can also specify how long the different types of logs are retained using a retention policy in the ClusterLogging CR. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. The Red Hat OpenShift Logging Operator and companion OpenShift Elasticsearch Operator ensure that each Elasticsearch node is deployed using a unique deployment that includes its own storage volume. You can use a ClusterLogging custom resource (CR) to increase the number of Elasticsearch nodes, as needed. See the Elasticsearch documentation for considerations involved in configuring storage. Note A highly-available Elasticsearch environment requires at least three Elasticsearch nodes, each on a different host. Role-based access control (RBAC) applied on the Elasticsearch indices enables the controlled access of the logs to the developers. Administrators can access all logs and developers can access only the logs in their projects. 10.1.2. Querying log stores You can query Loki by using the LogQL log query language . 10.1.3. Additional resources Loki components documentation Loki Object Storage documentation 10.2. Installing log storage You can use the OpenShift CLI ( oc ) or the OpenShift Dedicated web console to deploy a log store on your OpenShift Dedicated cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.1. Deploying a Loki log store You can use the Loki Operator to deploy an internal Loki log store on your OpenShift Dedicated cluster. After installing the Loki Operator, you must configure Loki object storage by creating a secret, and create a LokiStack custom resource (CR). 10.2.1.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is the number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 10.1. Loki sizing
                                           1x.demo         1x.extra-small      1x.small             1x.medium
Data transfer                              Demo use only   100GB/day           500GB/day            2TB/day
Queries per second (QPS)                   Demo use only   1-25 QPS at 200ms   25-50 QPS at 200ms   25-75 QPS at 200ms
Replication factor                         None            2                   2                    2
Total CPU requests                         None            14 vCPUs            34 vCPUs             54 vCPUs
Total CPU requests if using the ruler      None            16 vCPUs            42 vCPUs             70 vCPUs
Total memory requests                      None            31Gi                67Gi                 139Gi
Total memory requests if using the ruler   None            35Gi                83Gi                 171Gi
Total disk requests                        40Gi            430Gi               430Gi                590Gi
Total disk requests if using the ruler     80Gi            750Gi               750Gi                910Gi
10.2.1.2. Installing Logging and the Loki Operator using the web console To install and configure logging on your OpenShift Dedicated cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OperatorHub within the web console. Prerequisites You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). You have administrator permissions. You have access to the OpenShift Dedicated web console. Procedure In the OpenShift Dedicated web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable or stable-x.y as the Update channel . Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Install the Red Hat OpenShift Logging Operator: In the OpenShift Dedicated web console, click Operators OperatorHub . Choose Red Hat OpenShift Logging from the list of available Operators, and click Install .
Ensure that the A specific namespace on the cluster is selected under Installation Mode . Ensure that Operator recommended namespace is openshift-logging under Installed Namespace . Select Enable Operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. Select stable-5.y as the Update Channel . Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Important It is not possible to change the number 1x for the deployment size. Click Create . Create an OpenShift Logging instance: Switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition details page, select View Instances from the Actions menu. On the ClusterLoggings page, click Create ClusterLogging . You might have to refresh the page to load the data. 
In the YAML field, replace the code with the following: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . Verification Go to Operators Installed Operators . Make sure the openshift-logging project is selected. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date . Note An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page. 10.2.1.3. Creating a secret for Loki object storage by using the web console To configure Loki object storage, you must create a secret. You can create a secret by using the OpenShift Dedicated web console. Prerequisites You have administrator permissions. You have access to the OpenShift Dedicated web console. You installed the Loki Operator. Procedure Go to Workloads Secrets in the Administrator perspective of the OpenShift Dedicated web console. From the Create drop-down list, select From YAML . Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames , endpoint , and region fields to define the object storage location. AWS is used in the following example: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 Additional resources Loki object storage 10.2.1.4. Workload identity federation Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Prerequisites OpenShift Dedicated 4.14 and later Logging 5.9 and later Procedure If you use the OpenShift Dedicated web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a subscription object using the appropriate template for your storage provider, as shown in the following examples. This authentication strategy is only supported for the storage providers indicated. 
Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-5.9" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 10.2.1.5. Creating a LokiStack custom resource by using the web console You can create a LokiStack custom resource (CR) by using the OpenShift Dedicated web console. Prerequisites You have administrator permissions. You have access to the OpenShift Dedicated web console. You installed the Loki Operator. Procedure Go to the Operators Installed Operators page. Click the All instances tab. From the Create new drop-down list, select LokiStack . Select YAML view , and then use the following template to create a LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 10.2.1.6. Installing Logging and the Loki Operator using the CLI To install and configure logging on your OpenShift Dedicated cluster, an Operator such as Loki Operator for log storage must be installed first. This can be done from the OpenShift Dedicated CLI. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Note The stable channel only provides updates to the most recent release of logging. 
To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, you should configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Dedicated metric, which would cause conflicts. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for the Red Hat OpenShift Logging Operator: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-logging: "true" openshift.io/cluster-monitoring: "true" 2 1 The Red Hat OpenShift Logging Operator is only deployable to the openshift-logging namespace. 2 A string value that specifies the label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 1 You must specify the openshift-logging namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace 1 You must specify the openshift-logging namespace. 2 Specify stable , or stable-5.<y> as the channel. 3 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). 
Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: "2022-06-01" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify the openshift-logging namespace. 3 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. 8 LokiStack defaults to running in multi-tenant mode, which cannot be modified. One tenant is provided for each log type: audit, infrastructure, and application logs. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Create a ClusterLogging CR object: Example ClusterLogging CR object apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed 1 Name must be instance . 2 Namespace must be openshift-logging . 
Apply the ClusterLogging CR object by running the following command: USD oc apply -f <filename>.yaml Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 10.2.1.7. Creating a secret for Loki object storage by using the CLI To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a secret in the directory that contains your certificate and key files by running the following command: USD oc create secret generic -n openshift-logging <your_secret_name> \ --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password> Note Use generic or opaque secrets for best results. Verification Verify that a secret was created by running the following command: USD oc get secrets Additional resources Loki object storage 10.2.1.8. Creating a LokiStack custom resource by using the CLI You can create a LokiStack custom resource (CR) by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the Loki Operator. You installed the OpenShift CLI ( oc ). Procedure Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 1 Use the name logging-loki . 2 Specify the deployment size. In the logging 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . 3 Specify the secret used for your log storage. 4 Specify the corresponding storage type. 5 Optional field, logging 5.9 and later. Supported user configured values are as follows: static is the default authentication mode available for all supported object storage types using credentials stored in a Secret. token for short-lived tokens retrieved from a credential source. In this mode the static configuration does not contain credentials needed for the object storage. Instead, they are generated during runtime using a service, which allows for shorter-lived credentials and much more granular control. This authentication mode is not supported for all object storage types. 
token-cco is the default value when Loki is running on managed STS mode and using CCO on STS/WIF clusters. 6 Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command. Apply the LokiStack CR by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by listing the pods in the openshift-logging project by running the following command and observing the output: USD oc get pods -n openshift-logging Confirm that you see several pods for components of the logging, similar to the following list: Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s 10.2.2. Loki object storage The Loki Operator supports AWS S3 , as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation . Azure , GCS , and Swift are also supported. The recommended nomenclature for Loki storage is logging-loki- <your_storage_provider> . The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider. Table 10.2. Secret type quick reference
Storage provider              Secret type value
AWS                           s3
Azure                         azure
Google Cloud                  gcs
Minio                         s3
OpenShift Data Foundation     s3
Swift                         swift
10.2.2.1. AWS storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on AWS. You created an AWS IAM Policy and IAM User . Procedure Create an object storage secret with the name logging-loki-aws by running the following command: USD oc create secret generic logging-loki-aws \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<aws_bucket_endpoint>" \ --from-literal=access_key_id="<aws_access_key_id>" \ --from-literal=access_key_secret="<aws_access_key_secret>" \ --from-literal=region="<aws_region_of_your_bucket>" 10.2.2.1.1. AWS storage for STS enabled clusters If your cluster has STS enabled, the Cloud Credential Operator (CCO) supports short-term authentication using AWS tokens. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic "logging-loki-aws" \ --from-literal=bucketnames="<s3_bucket_name>" \ --from-literal=region="<bucket_region>" \ --from-literal=audience="<oidc_audience>" 1 1 Optional annotation, default value is openshift . 10.2.2.2.
Azure storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Azure. Procedure Create an object storage secret with the name logging-loki-azure by running the following command: USD oc create secret generic logging-loki-azure \ --from-literal=container="<azure_container_name>" \ --from-literal=environment="<azure_environment>" \ 1 --from-literal=account_name="<azure_account_name>" \ --from-literal=account_key="<azure_account_key>" 1 Supported environment values are AzureGlobal , AzureChinaCloud , AzureGermanCloud , or AzureUSGovernment . 10.2.2.2.1. Azure storage for Microsoft Entra Workload ID enabled clusters If your cluster has Microsoft Entra Workload ID enabled, the Cloud Credential Operator (CCO) supports short-term authentication using Workload ID. You can create the Loki object storage secret manually by running the following command: USD oc -n openshift-logging create secret generic logging-loki-azure \ --from-literal=environment="<azure_environment>" \ --from-literal=account_name="<storage_account_name>" \ --from-literal=container="<container_name>" 10.2.2.3. Google Cloud Platform storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a project on Google Cloud Platform (GCP). You created a bucket in the same project. You created a service account in the same project for GCP authentication. Procedure Copy the service account credentials received from GCP into a file called key.json . Create an object storage secret with the name logging-loki-gcs by running the following command: USD oc create secret generic logging-loki-gcs \ --from-literal=bucketname="<bucket_name>" \ --from-file=key.json="<path/to/key.json>" 10.2.2.4. Minio storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You have Minio deployed on your cluster. You created a bucket on Minio. Procedure Create an object storage secret with the name logging-loki-minio by running the following command: USD oc create secret generic logging-loki-minio \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="<minio_bucket_endpoint>" \ --from-literal=access_key_id="<minio_access_key_id>" \ --from-literal=access_key_secret="<minio_access_key_secret>" 10.2.2.5. OpenShift Data Foundation storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You deployed OpenShift Data Foundation . You configured your OpenShift Data Foundation cluster for object storage . 
Procedure Create an ObjectBucketClaim custom resource in the openshift-logging namespace: apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io Get bucket properties from the associated ConfigMap object by running the following command: BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}') Get bucket access key from the associated secret by running the following command: ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d) Create an object storage secret with the name logging-loki-odf by running the following command: USD oc create -n openshift-logging secret generic logging-loki-odf \ --from-literal=access_key_id="<access_key_id>" \ --from-literal=access_key_secret="<secret_access_key>" \ --from-literal=bucketnames="<bucket_name>" \ --from-literal=endpoint="https://<bucket_host>:<bucket_port>" 10.2.2.6. Swift storage Prerequisites You installed the Loki Operator. You installed the OpenShift CLI ( oc ). You created a bucket on Swift. Procedure Create an object storage secret with the name logging-loki-swift by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" You can optionally provide project-specific data, region, or both by running the following command: USD oc create secret generic logging-loki-swift \ --from-literal=auth_url="<swift_auth_url>" \ --from-literal=username="<swift_usernameclaim>" \ --from-literal=user_domain_name="<swift_user_domain_name>" \ --from-literal=user_domain_id="<swift_user_domain_id>" \ --from-literal=user_id="<swift_user_id>" \ --from-literal=password="<swift_password>" \ --from-literal=domain_id="<swift_domain_id>" \ --from-literal=domain_name="<swift_domain_name>" \ --from-literal=container_name="<swift_container_name>" \ --from-literal=project_id="<swift_project_id>" \ --from-literal=project_name="<swift_project_name>" \ --from-literal=project_domain_id="<swift_project_domain_id>" \ --from-literal=project_domain_name="<swift_project_domain_name>" \ --from-literal=region="<swift_region>" 10.2.3. Deploying an Elasticsearch log store You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OpenShift Dedicated cluster. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. 
As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 10.2.3.1. Storage considerations for Elasticsearch A persistent volume is required for each Elasticsearch deployment configuration. On OpenShift Dedicated this is achieved using persistent volume claims (PVCs). Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Fluentd ships any logs from systemd journal and /var/log/containers/*.log to Elasticsearch. Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity. By default, when storage capacity is 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. But if no nodes have a free capacity below 85%, Elasticsearch effectively rejects creating new indices and becomes RED. Note These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts. 10.2.3.2. Installing the OpenShift Elasticsearch Operator by using the web console The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging. Prerequisites Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource. The initial set of OpenShift Dedicated nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OpenShift Dedicated cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node. Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments. Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Procedure In the OpenShift Dedicated web console, click Operators OperatorHub . Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install . Ensure that the All namespaces on the cluster is selected under Installation mode . Ensure that openshift-operators-redhat is selected under Installed Namespace . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as OpenShift Dedicated metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. 
You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select stable-5.x as the Update channel . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Verify that the OpenShift Elasticsearch Operator installed by switching to the Operators Installed Operators page. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded . 10.2.3.3. Installing the OpenShift Elasticsearch Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Elasticsearch Operator. Prerequisites Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes. Elasticsearch is a memory-intensive application. By default, OpenShift Dedicated installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OpenShift Dedicated nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes. You have administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Create a Namespace object as a YAML file: apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Dedicated metric, which would cause conflicts. 2 String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object as a YAML file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {} 1 You must specify the openshift-operators-redhat namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator 1 You must specify the openshift-operators-redhat namespace. 2 Specify stable , or stable-x.y as the channel. See the following note. 
3 Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update. 4 Specify redhat-operators . If your OpenShift Dedicated cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM). Note Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable major and minor release. Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: "Automatic" automatically upgrades your Operators to the latest stable minor release within the major release. Apply the subscription by running the following command: USD oc apply -f <filename>.yaml The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster. Verification Run the following command: USD oc get csv -n --all-namespaces Observe the output and confirm that pods for the OpenShift Elasticsearch Operator exist in each namespace Example output NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded ... 10.2.4. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. 
This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.3. Configuring the LokiStack log store In logging documentation, LokiStack refers to the logging supported combination of Loki and web proxy with OpenShift Dedicated authentication integration. LokiStack's proxy uses OpenShift Dedicated authentication to enforce multi-tenancy. Loki refers to the log store as either the individual component or an external store. 10.3.1. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 10.3.2. LokiStack behavior during cluster restarts In logging version 5.8 and newer versions, when an OpenShift Dedicated cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Dedicated cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. Additional resources Pod disruption budgets Kubernetes documentation 10.3.3. Configuring Loki to tolerate node failure In the logging 5.8 and later versions, the Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Dedicated, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. 
You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. Additional resources PodAntiAffinity v1 core Kubernetes documentation Assigning Pods to Nodes Kubernetes documentation Placing pods relative to other pods using affinity and anti-affinity rules 10.3.4. Zone aware data replication In the logging 5.8 and later versions, the Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra.small , 1x.small , or 1x.medium, the replication.factor field is automatically set to 2. To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 10.3.4.1. Recovering Loki pods from failed zones In OpenShift Dedicated a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Dedicated cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Logging version 5.8 or later. Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. 
Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. List the PVCs in Pending status by running the following command: oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: oc delete pvc __<pvc_name>__ -n openshift-logging Then delete the pod(s) by running the following command: oc delete pod __<pod_name>__ -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 10.3.4.1.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. oc patch pvc __<pvc_name>__ -p '{"metadata":{"finalizers":null}}' -n openshift-logging Additional resources Topology spread constraints Kubernetes documentation Kubernetes storage documentation . 10.3.5. Fine grained access for Loki logs In logging 5.8 and later, the Red Hat OpenShift Logging Operator does not grant all users access to logs by default. As an administrator, you must configure your users' access unless the Operator was upgraded and prior configurations are in place. Depending on your configuration and need, you can configure fine grain access to logs using the following: Cluster wide policies Namespace scoped policies Creation of custom admin groups As an administrator, you need to create the role bindings and cluster role bindings appropriate for your deployment. The Red Hat OpenShift Logging Operator provides the following cluster roles: cluster-logging-application-view grants permission to read application logs. cluster-logging-infrastructure-view grants permission to read infrastructure logs. cluster-logging-audit-view grants permission to read audit logs. If you have upgraded from a prior version, an additional cluster role logging-application-logs-reader and associated cluster role binding logging-all-authenticated-application-logs-reader provide backward compatibility, allowing any authenticated user read access in their namespaces. Note Users with access by namespace must provide a namespace when querying application logs. 10.3.5.1. Cluster wide access Cluster role binding resources reference cluster roles, and set permissions cluster wide. 
Example ClusterRoleBinding kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: rbac.authorization.k8s.io 1 Additional ClusterRoles are cluster-logging-infrastructure-view , and cluster-logging-audit-view . 2 Specifies the users or groups this object applies to. 10.3.5.2. Namespaced access RoleBinding resources can be used with ClusterRole objects to define the namespace a user or group has access to logs for. Example RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0 1 Specifies the namespace this RoleBinding applies to. 10.3.5.3. Custom admin group access If you have a large deployment with several users who require broader permissions, you can create a custom group using the adminGroup field. Users who are members of any group specified in the adminGroups field of the LokiStack CR are considered administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ) 10.3.6. Enabling stream-based retention with Loki Additional resources With Logging version 5.6 and higher, you can configure retention policies based on log streams. Rules for these may be set globally, per tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Although logging version 5.9 and higher supports schema v12, v13 is recommended. To enable stream-based retention, create a LokiStack CR: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream. Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This setting does not manage retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage. 10.3.7. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding logging to a cluster that already has some logs, rate limit errors might occur while logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer.
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 10.3.8. Configuring Loki to tolerate memberlist creation failure In an OpenShift cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack CR to use the podIP in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP","type": "memberlist"}}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 10.3.9. Additional resources Loki components documentation Loki Query Language (LogQL) documentation Grafana Dashboard documentation Loki Object Storage documentation Loki Operator IngestionLimitSpec documentation Loki Storage Schema documentation 10.4. Configuring the Elasticsearch log store You can use Elasticsearch 6 to store and organize log data. You can make modifications to your log store, including: Storage for your Elasticsearch cluster Shard replication across data nodes in the cluster, from full replication to no replication External access to Elasticsearch data 10.4.1. Configuring log storage You can configure which log storage type your logging uses by modifying the ClusterLogging custom resource (CR). Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). 
You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch. You have created a ClusterLogging CR. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Procedure Modify the ClusterLogging CR logStore spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {} # ... 1 Specify the log store type. This can be either lokistack or elasticsearch . 2 Optional configuration options for the Elasticsearch log store. 3 Specify the redundancy type. This value can be ZeroRedundancy , SingleRedundancy , MultipleRedundancy , or FullRedundancy . 4 Optional configuration options for LokiStack. Example ClusterLogging CR to specify LokiStack as the log store apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki # ... Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 10.4.2. Forwarding audit logs to the log store In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. Procedure To use the Log Forward API to forward audit logs to the internal Elasticsearch instance: Create or edit a YAML file that defines the ClusterLogForwarder CR object: Create a CR to send all log types to the internal Elasticsearch instance. You can use the following example without making any changes: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default 1 A pipeline defines the type of logs to forward using the specified output. The default output forwards logs to the internal Elasticsearch instance. Note You must specify all three types of logs in the pipeline: application, infrastructure, and audit. If you do not specify a log type, those logs are not stored and will be lost. If you have an existing ClusterLogForwarder CR, add a pipeline to the default output for the audit logs. You do not need to define the default output. 
For example: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: "elasticsearch" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: "elasticsearch" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: "fluentdForward" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1 1 This pipeline sends the audit logs to the internal Elasticsearch instance in addition to an external instance. Additional resources About log collection and forwarding 10.4.3. Configuring log retention time You can configure a retention policy that specifies how long the default Elasticsearch log store keeps indices for each of the three log sources: infrastructure logs, application logs, and audit logs. To configure the retention policy, you set a maxAge parameter for each log source in the ClusterLogging custom resource (CR). The CR applies these values to the Elasticsearch rollover schedule, which determines when Elasticsearch deletes the rolled-over indices. Elasticsearch rolls over an index, moving the current index and creating a new index, when an index matches any of the following conditions: The index is older than the rollover.maxAge value in the Elasticsearch CR. The index size is greater than 40 GB x the number of primary shards. The index doc count is greater than 40960 KB x the number of primary shards. Elasticsearch deletes the rolled-over indices based on the retention policy you configure. If you do not create a retention policy for any log sources, logs are deleted after seven days by default. Prerequisites The Red Hat OpenShift Logging Operator and the OpenShift Elasticsearch Operator must be installed. Procedure To configure the log retention time: Edit the ClusterLogging CR to add or modify the retentionPolicy parameter: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" ... spec: managementState: "Managed" logStore: type: "elasticsearch" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 ... 1 Specify the time that Elasticsearch should retain each log source. Enter an integer and a time designation: weeks(w), hours(h/H), minutes(m) and seconds(s). For example, 1d for one day. Logs older than the maxAge are deleted. By default, logs are retained for seven days. You can verify the settings in the Elasticsearch custom resource (CR). For example, the Red Hat OpenShift Logging Operator updated the following Elasticsearch CR to configure a retention policy that includes settings to roll over active indices for the infrastructure logs every eight hours and the rolled-over indices are deleted seven days after rollover. OpenShift Dedicated checks every 15 minutes to determine if the indices need to be rolled over. apiVersion: "logging.openshift.io/v1" kind: "Elasticsearch" metadata: name: "elasticsearch" spec: ... indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4 ... 
1 For each log source, the retention policy indicates when to delete and roll over logs for that source. 2 When OpenShift Dedicated deletes the rolled-over indices. This setting is the maxAge you set in the ClusterLogging CR. 3 The index age for OpenShift Dedicated to consider when rolling over the indices. This value is determined from the maxAge you set in the ClusterLogging CR. 4 When OpenShift Dedicated checks if the indices should be rolled over. This setting is the default and cannot be changed. Note Modifying the Elasticsearch CR is not supported. All changes to the retention policies must be made in the ClusterLogging CR. The OpenShift Elasticsearch Operator deploys a cron job to roll over indices for each mapping using the defined policy, scheduled using the pollInterval . USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s 10.4.4. Configuring CPU and memory requests for the log store Each component specification allows for adjustments to both the CPU and memory requests. You should not have to manually adjust these values as the OpenShift Elasticsearch Operator sets values sufficient for your environment. Note In large-scale clusters, the default memory limit for the Elasticsearch proxy container might not be sufficient, causing the proxy container to be OOMKilled. If you experience this issue, increase the memory requests and limits for the Elasticsearch proxy. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. For production use, you should have no less than the default 16Gi allocated to each pod. Preferably you should allocate as much as possible, up to 64Gi per pod. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: 1 resources: limits: 2 memory: "32Gi" requests: 3 cpu: "1" memory: "16Gi" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi 1 Specify the CPU and memory requests for Elasticsearch as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that should be sufficient for most deployments. The default values are 16Gi for the memory request and 1 for the CPU request. 2 The maximum amount of resources a pod can use. 3 The minimum resources required to schedule a pod. 4 Specify the CPU and memory requests for the Elasticsearch proxy as needed. If you leave these values blank, the OpenShift Elasticsearch Operator sets default values that are sufficient for most deployments. The default values are 256Mi for the memory request and 100m for the CPU request. When adjusting the amount of Elasticsearch memory, the same value should be used for both requests and limits . For example: resources: limits: 1 memory: "32Gi" requests: 2 cpu: "8" memory: "32Gi" 1 The maximum amount of the resource. 2 The minimum amount required. Kubernetes generally adheres the node configuration and does not allow Elasticsearch to use the specified limits. 
Setting the same value for the requests and limits ensures that Elasticsearch can use the memory you want, assuming the node has the memory available. 10.4.5. Configuring replication policy for the log store You can define how Elasticsearch shards are replicated across data nodes in the cluster. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit clusterlogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" .... spec: logStore: type: "elasticsearch" elasticsearch: redundancyPolicy: "SingleRedundancy" 1 1 Specify a redundancy policy for the shards. The change is applied upon saving the changes. FullRedundancy . Elasticsearch fully replicates the primary shards for each index to every data node. This provides the highest safety, but at the cost of the highest amount of disk required and the poorest performance. MultipleRedundancy . Elasticsearch fully replicates the primary shards for each index to half of the data nodes. This provides a good tradeoff between safety and performance. SingleRedundancy . Elasticsearch makes one copy of the primary shards for each index. Logs are always available and recoverable as long as at least two data nodes exist. Better performance than MultipleRedundancy, when using 5 or more nodes. You cannot apply this policy on deployments of single Elasticsearch node. ZeroRedundancy . Elasticsearch does not make copies of the primary shards. Logs might be unavailable or lost in the event a node is down or fails. Use this mode when you are more concerned with performance than safety, or have implemented your own disk/PVC backup/restore strategy. Note The number of primary shards for the index templates is equal to the number of Elasticsearch data nodes. 10.4.6. Scaling down Elasticsearch pods Reducing the number of Elasticsearch pods in your cluster can result in data loss or Elasticsearch performance degradation. If you scale down, you should scale down by one pod at a time and allow the cluster to re-balance the shards and replicas. After the Elasticsearch health status returns to green , you can scale down by another pod. Note If your Elasticsearch cluster is set to ZeroRedundancy , you should not scale down your Elasticsearch pods. 10.4.7. Configuring persistent storage for the log store Elasticsearch requires persistent storage. The faster the storage, the faster the Elasticsearch performance. Warning Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify that each data node in the cluster is bound to a Persistent Volume Claim. apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" # ... spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: storageClassName: "gp2" size: "200G" This example specifies each data node in the cluster is bound to a Persistent Volume Claim that requests "200G" of AWS General Purpose SSD (gp2) storage. Note If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. 
Elasticsearch cannot use raw block volumes. 10.4.8. Configuring the log store for emptyDir storage You can use emptyDir with your log store, which creates an ephemeral deployment in which all of a pod's data is lost upon restart. Note When using emptyDir, if log storage is restarted or redeployed, you will lose data. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Edit the ClusterLogging CR to specify emptyDir: spec: logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 storage: {} 10.4.9. Performing an Elasticsearch rolling cluster restart Perform a rolling restart when you change the elasticsearch config map or any of the elasticsearch-* deployment configurations. Also, a rolling restart is recommended if the nodes on which an Elasticsearch pod runs requires a reboot. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure To perform a rolling cluster restart: Change to the openshift-logging project: Get the names of the Elasticsearch pods: Scale down the collector pods so they stop sending new logs to Elasticsearch: USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "false"}}}}}' Perform a shard synced flush using the OpenShift Dedicated es_util tool to ensure there are no pending operations waiting to be written to disk prior to shutting down: USD oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query="_flush/synced" -XPOST For example: Example output Prevent shard balancing when purposely bringing down nodes using the OpenShift Dedicated es_util tool: For example: Example output {"acknowledged":true,"persistent":{"cluster":{"routing":{"allocation":{"enable":"primaries"}}}},"transient": After the command is complete, for each deployment you have for an ES cluster: By default, the OpenShift Dedicated Elasticsearch cluster blocks rollouts to their nodes. Use the following command to allow rollouts and allow the pod to pick up the changes: For example: Example output A new pod is deployed. After the pod has a ready container, you can move on to the deployment. Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h After the deployments are complete, reset the pod to disallow rollouts: For example: Example output Check that the Elasticsearch cluster is in a green or yellow state: Note If you performed a rollout on the Elasticsearch pod you used in the commands, the pod no longer exists and you need a new pod name here. For example: 1 Make sure this parameter value is green or yellow before proceeding. If you changed the Elasticsearch configuration map, repeat these steps for each Elasticsearch pod. After all the deployments for the cluster have been rolled out, re-enable shard balancing: For example: Example output { "acknowledged" : true, "persistent" : { }, "transient" : { "cluster" : { "routing" : { "allocation" : { "enable" : "all" } } } } } Scale up the collector pods so they send new logs to Elasticsearch. USD oc -n openshift-logging patch daemonset/collector -p '{"spec":{"template":{"spec":{"nodeSelector":{"logging-infra-collector": "true"}}}}}' 10.4.10. Exposing the log store service as a route By default, the log store that is deployed with logging is not accessible from outside the logging cluster. 
You can enable a route with re-encryption termination for external access to the log store service for those tools that access its data. Externally, you can access the log store by creating a reencrypt route and using your OpenShift Dedicated token and the installed log store CA certificate. Then, access a node that hosts the log store service with a cURL request that contains: The Authorization: Bearer USD{token} The Elasticsearch reencrypt route and an Elasticsearch API request . Internally, you can access the log store service using the log store cluster IP, which you can get by using either of the following commands: USD oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging Example output 172.30.183.229 USD oc get service elasticsearch -n openshift-logging Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h You can check the cluster IP address with a command similar to the following: USD oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://172.30.183.229:9200/_cat/health" Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108 Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. You must have access to the project to be able to access the logs. Procedure To expose the log store externally: Change to the openshift-logging project: USD oc project openshift-logging Extract the CA certificate from the log store and write it to the admin-ca file: USD oc extract secret/elasticsearch --to=. --keys=admin-ca Example output admin-ca Create the route for the log store service as a YAML file: Create a YAML file with the following: apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1 1 Add the log store CA certificate or use the command in the following step. You do not have to set the spec.tls.key , spec.tls.certificate , and spec.tls.caCertificate parameters required by some reencrypt routes. Run the following command to add the log store CA certificate to the route YAML you created in the previous step: USD cat ./admin-ca | sed -e "s/^/ /" >> <file-name>.yaml Create the route: USD oc create -f <file-name>.yaml Example output route.route.openshift.io/elasticsearch created Check that the Elasticsearch service is exposed: Get the token of this service account to be used in the request: USD token=USD(oc whoami -t) Set the elasticsearch route you created as an environment variable.
USD routeES=`oc get route elasticsearch -o jsonpath={.spec.host}` To verify the route was successfully created, run the following command that accesses Elasticsearch through the exposed route: curl -tlsv1.2 --insecure -H "Authorization: Bearer USD{token}" "https://USD{routeES}" The response appears similar to the following: Example output { "name" : "elasticsearch-cdm-i40ktba0-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "0eY-tJzcR3KOdpgeMJo-MQ", "version" : { "number" : "6.8.1", "build_flavor" : "oss", "build_type" : "zip", "build_hash" : "Unknown", "build_date" : "Unknown", "build_snapshot" : true, "lucene_version" : "7.7.0", "minimum_wire_compatibility_version" : "5.6.0", "minimum_index_compatibility_version" : "5.0.0" }, "<tagline>" : "<for search>" } 10.4.11. Removing unused components if you do not use the default Elasticsearch log store As an administrator, in the rare case that you forward logs to a third-party log store and do not use the default Elasticsearch log store, you can remove several unused components from your logging cluster. In other words, if you do not use the default Elasticsearch log store, you can remove the internal Elasticsearch logStore and Kibana visualization components from the ClusterLogging custom resource (CR). Removing these components is optional but saves resources. Prerequisites Verify that your log forwarder does not send log data to the default internal Elasticsearch cluster. Inspect the ClusterLogForwarder CR YAML file that you used to configure log forwarding. Verify that it does not have an outputRefs element that specifies default . For example: outputRefs: - default Warning Suppose the ClusterLogForwarder CR forwards log data to the internal Elasticsearch cluster, and you remove the logStore component from the ClusterLogging CR. In that case, the internal Elasticsearch cluster will not be present to store the log data. This absence can cause data loss. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance If they are present, remove the logStore and visualization stanzas from the ClusterLogging CR. Preserve the collection stanza of the ClusterLogging CR. The result should look similar to the following example: apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" spec: managementState: "Managed" collection: type: "fluentd" fluentd: {} Verify that the collector pods are redeployed: USD oc get pods -l component=collector -n openshift-logging
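If the collection stanza was preserved, the collector pods remain scheduled and the command lists them in a Running state. Output similar to the following is expected; the pod names, pod count, and ages shown here are illustrative:
NAME              READY   STATUS    RESTARTS   AGE
collector-8d69v   2/2     Running   0          3m
collector-jpmsz   2/2     Running   0          3m
collector-wlqx5   2/2     Running   0          3m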
[ "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed", "apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 namespace: openshift-logging stringData: access_key_id: AKIAIOSFODNN7EXAMPLE access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-5.9\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc apply -f <filename>.yaml", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v12 effectiveDate: \"2022-06-01\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 
6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: ocpConsole: logsLimit: 15 managementState: Managed", "oc apply -f <filename>.yaml", "oc get pods -n openshift-logging", "oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m", "oc create secret generic -n openshift-logging <your_secret_name> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "oc get secrets", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging spec: size: 1x.small 2 storage: schemas: - effectiveDate: '2023-10-15' version: v13 secret: name: logging-loki-s3 3 type: s3 4 credentialMode: 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging", "oc get pods -n openshift-logging", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-78fddc697-mnl82 1/1 Running 0 14m collector-6cglq 2/2 Running 0 45s collector-8r664 2/2 Running 0 45s collector-8z7px 2/2 Running 0 45s collector-pdxl9 2/2 Running 0 45s collector-tc9dx 2/2 Running 0 45s collector-xkd76 2/2 Running 0 45s logging-loki-compactor-0 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-25j9g 1/1 Running 0 8m2s logging-loki-distributor-b85b7d9fd-xwjs6 1/1 Running 0 8m2s logging-loki-gateway-7bb86fd855-hjhl4 2/2 Running 0 8m2s logging-loki-gateway-7bb86fd855-qjtlb 2/2 Running 0 8m2s logging-loki-index-gateway-0 1/1 Running 0 8m2s logging-loki-index-gateway-1 1/1 Running 0 7m29s logging-loki-ingester-0 1/1 Running 0 8m2s logging-loki-ingester-1 1/1 Running 0 6m46s logging-loki-querier-f5cf9cb87-9fdjd 1/1 Running 0 8m2s logging-loki-querier-f5cf9cb87-fp9v5 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-lfvbc 1/1 Running 0 8m2s logging-loki-query-frontend-58c579fcb7-tjf9k 1/1 Running 0 8m2s logging-view-plugin-79448d8df6-ckgmx 1/1 Running 0 46s", "oc create secret generic logging-loki-aws --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\"", "oc -n openshift-logging create secret generic \"logging-loki-aws\" --from-literal=bucketnames=\"<s3_bucket_name>\" --from-literal=region=\"<bucket_region>\" 
--from-literal=audience=\"<oidc_audience>\" 1", "oc create secret generic logging-loki-azure --from-literal=container=\"<azure_container_name>\" --from-literal=environment=\"<azure_environment>\" \\ 1 --from-literal=account_name=\"<azure_account_name>\" --from-literal=account_key=\"<azure_account_key>\"", "oc -n openshift-logging create secret generic logging-loki-azure --from-literal=environment=\"<azure_environment>\" --from-literal=account_name=\"<storage_account_name>\" --from-literal=container=\"<container_name>\"", "oc create secret generic logging-loki-gcs --from-literal=bucketname=\"<bucket_name>\" --from-file=key.json=\"<path/to/key.json>\"", "oc create secret generic logging-loki-minio --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<minio_bucket_endpoint>\" --from-literal=access_key_id=\"<minio_access_key_id>\" --from-literal=access_key_secret=\"<minio_access_key_secret>\"", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: loki-bucket-odf namespace: openshift-logging spec: generateBucketName: loki-bucket-odf storageClassName: openshift-storage.noobaa.io", "BUCKET_HOST=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}') BUCKET_NAME=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}') BUCKET_PORT=USD(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')", "ACCESS_KEY_ID=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d) SECRET_ACCESS_KEY=USD(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)", "oc create -n openshift-logging secret generic logging-loki-odf --from-literal=access_key_id=\"<access_key_id>\" --from-literal=access_key_secret=\"<secret_access_key>\" --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"https://<bucket_host>:<bucket_port>\"", "oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\"", "oc create secret generic logging-loki-swift --from-literal=auth_url=\"<swift_auth_url>\" --from-literal=username=\"<swift_usernameclaim>\" --from-literal=user_domain_name=\"<swift_user_domain_name>\" --from-literal=user_domain_id=\"<swift_user_domain_id>\" --from-literal=user_id=\"<swift_user_id>\" --from-literal=password=\"<swift_password>\" --from-literal=domain_id=\"<swift_domain_id>\" --from-literal=domain_name=\"<swift_domain_name>\" --from-literal=container_name=\"<swift_container_name>\" --from-literal=project_id=\"<swift_project_id>\" --from-literal=project_name=\"<swift_project_name>\" --from-literal=project_domain_id=\"<swift_project_domain_id>\" --from-literal=project_domain_name=\"<swift_project_domain_name>\" --from-literal=region=\"<swift_region>\"", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: 
openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}", "oc apply -f <filename>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elasticsearch-operator namespace: openshift-operators-redhat 1 spec: channel: stable-x.y 2 installPlanApproval: Automatic 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace name: elasticsearch-operator", "oc apply -f <filename>.yaml", "oc get csv -n --all-namespaces", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-node-lease elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-public elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded kube-system elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded non-destructive-test elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver-operator elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded openshift-apiserver elasticsearch-operator.v5.8.1 OpenShift Elasticsearch Operator 5.8.1 elasticsearch-operator.v5.8.0 Succeeded", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki", "oc apply -f <filename>.yaml", "oc adm groups new cluster-admin", "oc adm groups add-users cluster-admin <username>", "oc adm policy add-cluster-role-to-group cluster-admin cluster-admin", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4", "get pods --field-selector status.phase==Pending -n openshift-logging", "NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m", "get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r", "storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1", "delete pvc __<pvc_name>__ -n openshift-logging", "delete pod __<pod_name>__ -n openshift-logging", "patch pvc __<pvc_name>__ -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: logging-all-application-logs-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view 1 subjects: 2 - kind: Group name: system:authenticated apiGroup: 
rbac.authorization.k8s.io", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: allow-read-logs namespace: log-test-0 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-logging-application-view subjects: - kind: User apiGroup: rbac.authorization.k8s.io name: testuser-0", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: tenants: mode: openshift-logging 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v11 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "oc apply -f <filename>.yaml", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk=\"604251225bf5378ed1567231a1c03b8b\" error_class=Fluent::Plugin::LokiOutput::LogPostError error=\"429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\\n\"", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\",\"type\": \"memberlist\"}}}}'", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: logStore: type: <log_store_type> 1 elasticsearch: 2 nodeCount: <integer> resources: {} storage: {} redundancyPolicy: <redundancy_type> 3 lokistack: 4 name: {}", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: managementState: Managed logStore: type: lokistack lokistack: name: logging-loki", "oc apply -f <filename>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3", "apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s", "oc edit ClusterLogging instance", 
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi", "resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"", "oc edit clusterlogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "oc project openshift-logging", "oc get pods -l component=elasticsearch", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":", "oc rollout resume deployment/<deployment-name>", "oc rollout resume deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 resumed", "oc get pods -l component=elasticsearch-", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h", "oc rollout pause deployment/<deployment-name>", "oc rollout pause deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 paused", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "oc exec 
elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'", "oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging", "172.30.183.229", "oc get service elasticsearch -n openshift-logging", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h", "oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108", "oc project openshift-logging", "oc extract secret/elasticsearch --to=. --keys=admin-ca", "admin-ca", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1", "cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml", "oc create -f <file-name>.yaml", "route.route.openshift.io/elasticsearch created", "token=USD(oc whoami -t)", "routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`", "curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"", "{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }", "outputRefs: - default", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: type: \"fluentd\" fluentd: {}", "oc get pods -l component=collector -n openshift-logging" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/log-storage
Edge computing
Edge computing OpenShift Container Platform 4.15 Configure and deploy OpenShift Container Platform clusters at the network edge Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/edge_computing/index
Chapter 5. Using the web console for managing firewall
Chapter 5. Using the web console for managing firewall A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules. These rules are used to sort the incoming traffic and either block it or allow through. 5.1. Prerequisites The RHEL 7 web console configures the firewalld service. For details about the firewalld service, see firewalld . 5.2. Using the web console to run the firewall This section describes where and how to run the RHEL 7 system firewall in the web console. Note The web console configures the firewalld service. Procedure Log in to the web console. For details, see Logging in to the web console . Open the Networking section. In the Firewall section, click ON to run the firewall. If you do not see the Firewall box, log in to the web console with the administration privileges. At this stage, your firewall is running. To configure firewall rules, see Adding rules in the web console using the web console . 5.3. Using the web console to stop the firewall This section describes where and how to stop the RHEL 7 system firewall in the web console. Note The web console configures the firewalld service. Procedure Log in to the web console. For details, see Logging in to the web console . Open the Networking section. In the Firewall section, click OFF to stop it. If you do not see the Firewall box, log in to the web console with the administration privileges. At this stage, the firewall has been stopped and does not secure your system. 5.4. firewalld firewalld is a firewall service daemon that provides a dynamic customizable host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting the rules without the necessity to restart the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services , that simplify the traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level this network is assigned. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open . firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted , allow all traffic by default. Additional resources firewalld(1) man page 5.5. Zones firewalld can be used to separate networks into different zones according to the level of trust that the user has decided to place on the interfaces and traffic within that network. A connection can only be part of one zone, but a zone can be used for many network connections. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with: NetworkManager firewall-config tool firewall-cmd command-line tool The RHEL web console The latter three can only edit the appropriate NetworkManager configuration files. If you change the zone of the interface using the web console, firewall-cmd or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . 
The predefined zones are stored in the /usr/lib/firewalld/zones/ directory and can be instantly applied to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The default settings of the predefined zones are as follows: block Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Only network connections initiated from within the system are possible. dmz For computers in your demilitarized zone that are publicly-accessible with limited access to your internal network. Only selected incoming connections are accepted. drop Any incoming network packets are dropped without any notification. Only outgoing network connections are possible. external For use on external networks with masquerading enabled, especially for routers. You do not trust the other computers on the network to not harm your computer. Only selected incoming connections are accepted. home For use at home when you mostly trust the other computers on the network. Only selected incoming connections are accepted. internal For use on internal networks when you mostly trust the other computers on the network. Only selected incoming connections are accepted. public For use in public areas where you do not trust other computers on the network. Only selected incoming connections are accepted. trusted All network connections are accepted. work For use at work where you mostly trust the other computers on the network. Only selected incoming connections are accepted. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is set to be the public zone. The default zone can be changed. Note The network zone names have been chosen to be self-explanatory and to allow users to quickly make a reasonable decision. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. Additional resources firewalld.zone(5) man page 5.6. Zones in the web console Important Firewall zones are new in RHEL 7.7.0. The Red Hat Enterprise Linux web console implements major features of the firewalld service and enables you to: Add predefined firewall zones to a particular interface or range of IP addresses Configure zones by selecting services to include in the list of enabled services Disable a service by removing this service from the list of enabled services Remove a zone from an interface 5.7. Enabling zones using the web console The web console enables you to apply predefined and existing firewall zones on a particular interface or a range of IP addresses. This section describes how to enable a zone on an interface. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administration privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. In the Firewall section, click Add Services . Click on the Add Zone button. In the Add Zone dialog box, select a zone from the Trust level scale. You can see here all zones predefined in the firewalld service.
In the Interfaces part, select an interface or interfaces on which the selected zone is applied. In the Allowed Addresses part, you can select whether the zone is applied on: the whole subnet or a range of IP addresses in the following format: 192.168.1.0 192.168.1.0/24 192.168.1.0/24, 192.168.1.0 Click on the Add zone button. Verify the configuration in Active zones . 5.8. Enabling services on the firewall using the web console By default, services are added to the default firewall zone. If you use more firewall zones on more network interfaces, you must select a zone first and then add the service with port. The web console displays predefined firewalld services and you can add them to active firewall zones. Important The web console configures the firewalld service. The web console does not allow generic firewalld rules which are not listed in the web console. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. In the Firewall section, click Add Services . In the Add Services dialog box, select a zone for which you want to add the service. The Add Services dialog box includes a list of active firewall zones only if the system includes multiple active zones. If the system uses just one (the default) zone, the dialog does not include zone settings. In the Add Services dialog box, find the service you want to enable on the firewall. Enable desired services. Click Add Services . At this point, the web console displays the service in the list of Allowed Services . 5.9. Configuring custom ports using the web console The web console allows you to add: Services listening on standard ports: Section 5.8, "Enabling services on the firewall using the web console" Services listening on custom ports. This section describes how to add services with custom ports configured. Prerequisites The web console has been installed. For details, see Installing the web console . The firewall must be enabled. For details, see Running the firewall in the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administration privileges. In the Firewall section, click Add Services . In the Add Services dialog box, select a zone for which you want to add the service. The Add Services dialog box includes a list of active firewall zones only if the system includes multiple active zones. If the system uses just one (the default) zone, the dialog does not include zone settings. In the Add Ports dialog box, click on the Custom Ports radio button. In the TCP and UDP fields, add ports according to examples. You can add ports in the following formats: Port numbers such as 22 Range of port numbers such as 5900-5910 Aliases such as nfs, rsync Note You can add multiple values into each field. Values must be separated with the comma and without the space, for example: 8080,8081,http After adding the port number in the TCP and/or UDP fields, verify the service name in the Name field. 
The Name field displays the name of the service for which this port is reserved. You can rewrite the name if you are sure that this port is free to use and no server needs to communicate on this port. In the Name field, add a name for the service including defined ports. Click on the Add Ports button. To verify the settings, go to the Firewall page and find the service in the list of Allowed Services . 5.10. Disabling zones using the web console This section describes how to disable a firewall zone in your firewall configuration using the web console. Prerequisites The web console has been installed. For details, see Installing the web console . Procedure Log in to the RHEL web console with administrator privileges. For details, see Logging in to the web console . Click Networking . Click on the Firewall box title. If you do not see the Firewall box, log in to the web console with the administrator privileges. On the Active zones table, click on the Delete icon next to the zone you want to remove. The zone is now disabled and the interface no longer includes the open services and ports that were configured in the zone.
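The same zone, service, and port changes can also be made from a terminal with the firewall-cmd tool mentioned in Section 5.5. The following is a minimal sketch only, assuming the firewalld service is running and using work , eth0 , and 8080 as placeholder values for your own zone, interface, and port:
firewall-cmd --get-active-zones
firewall-cmd --permanent --zone=work --change-interface=eth0
firewall-cmd --permanent --zone=work --add-service=https
firewall-cmd --permanent --zone=work --add-port=8080/tcp
firewall-cmd --reload
The first command lists the currently active zones, the middle three stage the zone assignment, the service, and the custom port in the permanent configuration, and the final command reloads firewalld so that the staged rules take effect.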
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/using-the-web-console-for-managing-firewall_system-management-using-the-rhel-7-web-console
Chapter 1. Introduction to the Migration Toolkit for Applications
Chapter 1. Introduction to the Migration Toolkit for Applications Migration Toolkit for Applications (MTA) is a set of tools that you can use to accelerate large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. MTA looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA also generates a detailed report that evaluates a migration or modernization path. By using this report, you can estimate the effort required for large-scale projects and reduce the work involved. By using the MTA, you can perform the following tasks: Use the extensive MTA default questionnaire to assess your applications, or create your own custom questionnaire to estimate the difficulty, time, and other resources needed to prepare an application for containerization. You can use the results of an assessment for discussions between stakeholders to determine which applications are suitable for containerization. Analyze applications by applying one or more sets of rules to each application. You can use these rules to determine which specific lines of the application must be modified before the application can be modernized. Examine application artifacts, including project source directories and application archives, and produce an HTML report that highlights areas that require changes. 1.1. The MTA features Migration Toolkit for Applications (MTA) provides the following features to simplify upgrades with more migration paths: New application inventory and assessment modules to assist organizations in managing, classifying, and tagging their applications while assessing application suitability for deployment in containers, including flagging potential risks for migration strategies. Full integration with source code and binary repositories to automate the retrieval of applications for analysis along with proxy integration, including HTTP and HTTPS proxy configuration managed in the user interface. Improved analysis capabilities with new analysis modes, including source and dependency modes that parse repositories to gather dependencies and add these dependencies to the overall scope of the analysis. You can also use a simplified user experience to configure the analysis scope, including open source libraries. Enhanced Role-Based Access Control (RBAC) powered by Red Hat Single Sign-On to define new differentiated personas (administrator, architect, and migrator) with different permissions to suit the needs of each user, including credentials management for multiple credential types. Administration perspective to provide tool-wide configuration management for administrators. Support for Red Hat OpenShift on AWS (ROSA) is introduced in MTA 7.0.0. Support for Azure Red Hat OpenShift (ARO) is introduced in MTA 7.0.0. Multi-language support is introduced in 7.1.0. In Migration Toolkit for Applications (MTA) 7.1.0, you can use MTA to analyze applications written in languages other than Java. ( Developer Preview ) 1.2. The MTA rules The Migration Toolkit for Applications (MTA) contains rule-based migration tools (analyzers) that you can use to analyze the application programming interfaces (APIs), technologies, and architectures used by the applications you plan to migrate. MTA analyzer rules use the following rule pattern: You can use the MTA rules internally to perform the following tasks: Extract files from archives. Decompile files. Scan and classify file types.
Analyze XML and other file content. Analyze the application code. Build the reports. MTA builds a data model based on the rule execution results and stores component data and relationships in a graph database. This database can then be queried and updated as required by the migration rules and for reporting purposes. Note You can create your own custom analyzer rules. You can use custom rules to identify the use of custom libraries or other components that might not be covered by the provided standard migration rules. For instructions on how to write custom rules, see [ Rule Development Guide ].
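As a rough illustration of the when/message/tag rule pattern described in this chapter, a custom analyzer rule can be written as a short YAML document such as the sketch below. The rule ID, the Java package pattern, and the tag are placeholder values chosen only for this example; the authoritative field names and schema are defined in the Rule Development Guide, not here:
- ruleID: custom-legacy-client-00001
  description: Flag references to an in-house legacy client library
  when:
    java.referenced:
      pattern: com.example.legacy.Client
  message: Replace the in-house legacy client with the supported SDK before containerizing.
  tag:
    - legacy-client
When the analyzer matches the condition in the when block, it records the message as a finding for the affected code locations and applies the listed tag to the application.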
[ "when(condition) message(message) tag(tags)" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/introduction_to_the_migration_toolkit_for_applications/mta-about-the-intro-to-mta-guide_getting-started-guide
Chapter 1. Overview of Cryostat
Chapter 1. Overview of Cryostat Cryostat is a container-native Java application based on JDK Flight Recorder (JFR) that you can use to monitor Java Virtual Machine (JVM) performance for containerized workloads that run on a Red Hat OpenShift cluster. You can deploy Cryostat in a container in a Red Hat OpenShift project that hosts your containerized Java applications. You can create JVM targets that correspond to the JVM instances that you use to run your containerized workload. You can connect Cryostat to the JVM targets to record and analyze data about heap and non-heap memory usage, thread count, garbage collection, and other performance metrics for each JVM target. You can use the tools that are included with Cryostat to monitor the performance of your JVMs in real time, capture JDK Flight Recorder (JFR) recordings and snapshots, generate Automated Analysis reports, and visualize your recorded performance data by using a Grafana dashboard. The Cryostat web console and HTTP API provides a way to analyze your JVM performance data inside the container without having to rely on an external monitoring application. However, you can also export your recordings from Cryostat into an external instance of JDK Mission Control (JMC) when you need to perform a deeper analysis of your data outside of a cluster environment. Cryostat supports role-based access control (RBAC) as a standard feature of OpenShift Container Platform. You can configure different levels of authorization for each user role to ensure the privacy and integrity of your Flight Recording data. You can install Cryostat inside a Red Hat OpenShift project by using Operator Lifecycle Manager (OLM). You can also download the latest Cryostat component images from the Red Hat Ecosystem Catalog. The following container images exist for Cryostat 2.4 on the Red Hat Ecosystem Catalog: Cryostat Red Hat build of Cryostat Operator Red Hat build of Cryostat Operator bundle Cryostat reports Cryostat Grafana dashboard JFR data source Additional resources Operator Lifecycle Manager (OLM) (OpenShift Container Platform) Container images (Red Hat Ecosystem Catalog)
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/installing_cryostat/con-overview-of-cryostat_cryostat
About
About Red Hat Advanced Cluster Management for Kubernetes 2.12 About 2.12
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/about/index
Chapter 6. Notable Bug Fixes
Chapter 6. Notable Bug Fixes This chapter describes bugs fixed in Red Hat Enterprise Linux 7.9 that have a significant impact on users. 6.1. Authentication and Interoperability A deadlock no longer occurs when using SASL binds to Directory Server Previously, a SASL bind to Directory Server could attempt using callbacks that were modified during the connection process. Consequently, a deadlock occurred, and Directory Server could terminate unexpectedly. With this update, the server uses a connection lock that prevents modifying IO layers and callbacks while they are in use. As a result, the deadlock no longer occurs when using SASL binds. ( BZ#1801327 ) The 389-ds-base package now sets the required permissions on directories owned by the Directory Server user If directories in the file system owned by the Directory Server user do not have the correct permissions, Directory Server utilities adjust them accordingly. However, if these permissions were different to the ones that were set during the RPM installation, verifying the RPM using the rpm -V 389-ds-base command failed. This update fixes the permissions in the RPM. As a consequence, verifying the 389-ds-base package no longer complains about incorrect permissions. ( BZ#1700987 ) A memory leak has been fixed in Directory Server when using ip binding rules in an ACI with IPv6 The Access Control Instruction (ACI) context in Directory Server is attached to a connection and contains a structure for both the IPv4 and IPv6 protocol. Previously, when a client closed a connection, Directory Server removed only the IPv4 structure and the context. As a consequence, if an administrator configured an ACI with an ip binding rule, Directory Server leaked memory of the IPv6 structure. With this update, the server frees both the IPv4 and IPv6 structures at the end of a connection. As a result, Directory Server no longer leaks memory in the mentioned scenario. ( BZ#1796558 ) Directory Server no longer leaks memory when using ACIs with an ip bind rule When a Directory Server Access Control Instruction (ACI) contains an ip bind rule, the server stores the value of the ip keyword as a reference while evaluating the ACI. Previously, when the evaluations were completed, Directory Server did not free the ip value. As a consequence, the server leaked around 100 bytes of memory each time the server evaluated an ACI with an ip bind rule. With this update, Directory Server keeps track of the ip value in the per-connection structure and frees the structure when the connection is closed. As a consequence, Directory Server no longer leaks memory in the mentioned scenario. ( BZ#1769418 ) Directory Server no longer rejects wildcards in the rootdn-allow-ip and rootdn-deny-ip parameters Previously, when an administrator tried to set a wildcard in the rootdn-allow-ip or rootdn-deny-ip parameters in the cn=RootDN Access Control Plugin,cn=plugins,cn=config entry, Directory Server rejected the value. With this update, you can use wildcards when specifying allowed or denied IP addresses in the mentioned parameters. ( BZ#1807537 ) Directory Server rejects update operations if retrieving the system time fails or the time difference is too large Previously, when calling the system time() function failed or the function returned an unexpected value, Change Sequence Numbers (CSN) in Directory Server could become corrupted. As a consequence, the administrator had to re-initialize all replicas in the environment.
With this update, Directory Server rejects the update operation if the time() function failed, and Directory Server no longer generates corrupt CSNs in the mentioned scenario. Note that, if the time difference is greater than one day, the server logs a INFO - csngen_new_csn - Detected large jump in CSN time message in the /var/log/dirsrv/slapd-<instance_name>/error file. However, Directory Server still creates the CSN and does not reject the update operation. ( BZ#1837105 ) Directory Server no longer hangs while updating the schema Previously, during a mixed load of search and modify operations, the update of the Directory Server schema blocked all search and modify operations, and the server appeared to hang. This update adjusts the mutex locking during schema updates. As a result, the server does not hang while updating the schema. ( BZ#1824930 ) Directory Server no longer leaks memory when using indirect COS definitions Previously, after processing an indirect Class Of Service (COS) definition, Directory Server leaked memory for each search operation that used an indirect COS definition. With this update, Directory Server frees all internal COS structures associated with the database entry after it has been processed. As a result, the server no longer leaks memory when using indirect COS definitions. ( BZ#1827284 ) Password expiration notifications sent to AD clients using SSSD Previously, Active Directory clients (non-IdM) using SSSD were not sent password expiration notices because of a recent change in the SSSD interface for acquiring Kerberos credentials. The Kerberos interface has been updated and expiration notices are now sent correctly. ( BZ#1733289 ) KDCs now correctly enforce password lifetime policy from LDAP backends Previously, non-IPA Kerberos Distribution Centers (KDCs) did not ensure maximum password lifetimes because the Kerberos LDAP backend incorrectly enforced password policies. With this update, the Kerberos LDAP backend has been fixed, and password lifetimes behave as expected. ( BZ#1782492 ) The pkidaemon tool now reports the correct status of PKI instances when nuxwdog is enabled Previously, the pkidaemon status command would not report the correct status for PKI server instances that have the nuxwdog watchdog enabled. With this update, pkidaemon detects whether nuxwdog is enabled and reports the correct status of the PKI server. ( BZ#1487418 ) 6.2. Compiler and Tools The strptime() method of the Time::Piece Perl module now correctly parses Julian dates The Time::Piece Perl module did not correctly parse a day of the year ( %j ) using the strptime() method. Consequently, Julian dates were parsed incorrectly. This bug has been fixed, and the strptime() method provided by the Time::Piece module now handles Julian dates properly. ( BZ#1751381 ) Documentation files from perl-devel no longer have a write permission for a group Previously, certain documentation files from the perl-devel package had a write permission set for a group. Consequently, users in the root group could write into these files, which represented a security risk. With this update, the write bit for a group has been removed for the affected files. As a result, no documentation file from perl-devel has a write permission set for a group. ( BZ#1806523 ) 6.3. Kernel Resuming from hibernation now works on the megaraid_sas driver Previously, when the megaraid_sas driver resumed from hibernation, the Message Signaled Interrupts (MSIx) allocation did not work correctly. 
As a consequence, resuming from hibernation failed, and restarting the system was required. This bug has been fixed, and resuming from hibernation now works as expected. (BZ#1807077) Disabling logging in the nf-logger framework has been fixed Previously, when an admin used the sysctl or echo commands to turn off an assigned netfilter logger, a NUL -character was not added to the end of the NONE string. Consequently, the strcmp() function failed with a No such file or directory error. This update fixes the problem. As a result, commands, such as sysctl net.netfilter.nf_log.2=NONE work as expected and turn off logging. (BZ#1770232) XFS now mounts correctly even if the storage device reported invalid geometry at file system creation In RHEL 7.8, an XFS file system failed to mount with the error SB stripe unit sanity check failed if it was created on a block device that reported invalid stripe geometry to the mkfs.xfs tool. With this update, XFS now mounts the file system even if it was created based on invalid stripe geometry. For details, see the following solution article: https://access.redhat.com/solutions/5075561 . (BZ#1836292) 6.4. Networking The same zone file can now be included in multiple views or zones in BIND BIND 9.11 introduced an additional check to ensure that no daemon writable zone file is used multiple times, which would result in creating errors in zone journal serialization. Consequently, configuration accepted by BIND 9.9 was no longer accepted by this daemon. With this update, the fatal error message in configuration file check is replaced by a warning, and as a result, the same zone file can now be included in multiple views or zones. Note that using an in-view clause is recommended as a better solution. ( BZ#1744081 ) A configuration parameter has been added to firewalld to disable zone drifting Previously, the firewalld service contained an undocumented behavior known as "zone drifting". RHEL 7.8 removed this behavior because it could have a negative security impact. As a consequence, on hosts that used this behavior to configure a catch-all or fallback zone, firewalld denied connections that were previously allowed. This update re-adds the zone drifting behavior, but as a configurable feature. As a result, users can now decide to use zone drifting or disable the behavior for a more secure firewall setup. By default, in RHEL 7.9, the new AllowZoneDrifting parameter in the /etc/firewalld/firewalld.conf file is set to yes . Note that, if the parameter is enabled, firewalld logs: ( BZ#1796055 ) RHEL rotates firewalld log files Previously, RHEL did not rotate firewalld log files. As a consequence, the /var/log/firewalld log file grew indefinitely. This update adds the /etc/logrotate.d/firewalld log rotation configuration file for the firewalld service. As a result, the /var/log/firewalld log is rotated, and users can customize the rotation settings in the /etc/logrotate.d/firewalld file. ( BZ#1754117 ) 6.5. Security Recursive dependencies no longer cause OpenSCAP crashes Because systemd units can have dependent units, OpenSCAP scans could encounter cyclical dependencies that caused the scan to terminate unexpectedly. With this update, OpenSCAP no longer analyses previously analysed units. As a result, scans now complete with a valid result even if dependencies are cyclical. 
( BZ#1478285 ) OpenSCAP scanner results no longer contain a lot of SELinux context error messages Previously, the OpenSCAP scanner logged the inability to get the SELinux context on the ERROR level even in situations where it is not a true error. Consequently, scanner results contained a lot of SELinux context error messages and both the oscap command-line utility and the SCAP Workbench graphical utility outputs were hard to read for that reason. The openscap packages have been fixed, and scanner results no longer contain a lot of SELinux context error messages. ( BZ#1640522 ) audit_rules_privileged_commands now works correctly for privileged commands Remediation of the audit_rules_privileged_commands rule in the scap-security-guide packages did not account for a special case in parsing command names. Additionally, the ordering of certain rules prevented successful remediation. As a consequence, remediation of certain combinations of rules reported they were fixed although successive scans reported the rule as failing again. This update improves regular expressions in the rule and the ordering of the rules. As a result, all privileged commands are correctly audited after remediation. ( BZ#1691877 ) Updated rule descriptions in the SCAP Security Guide Because default kernel parameters cannot be reliably determined for all supported versions of RHEL, checking kernel parameter settings always requires explicit configuration. The text in the configuration guide mistakenly stated that explicit settings were not needed if the default version was compliant. With this update, the rule description in the scap-security-guide package correctly describes the compliance evaluation and the corresponding remediation. ( BZ#1494606 ) configure_firewalld_rate_limiting now correctly rate-limits connections The configure_firewalld_rate_limiting rule, which protects the system from Denial of Service (DoS) attacks, previously configured the system to accept all traffic. With this update, the system correctly rate-limits connections after remediating this rule. ( BZ#1609014 ) dconf_gnome_login_banner_text no longer incorrectly fails Remediation of the dconf_gnome_login_banner_text rule in the scap-security-guide packages previously failed after a failure to scan the configuration. As a consequence, the remediation could not properly update the login banner configuration, which was inconsistent with expected results. With this update, Bash and Ansible remediations are more reliable and align with the configuration check implemented using the OVAL standard. As a consequence, remediations now work properly and the rule passes after remediation. ( BZ#1776780 ) scap-security-guide Ansible remediations no longer include the follow argument Prior to this update, scap-security-guide Ansible remediations could contain the follow argument in the replace module. Because follow was deprecated in Ansible 2.5, and will be removed in Ansible 2.10, using such remediations caused an error. With the release of the RHBA-2021:1383 advisory, the argument has been removed. As a result, Ansible playbooks by scap-security-guide will work properly in Ansible 2.10. ( BZ#1890111 ) Postfix-specific rules no longer fail if postfix is not installed Previously, SCAP Security Guide (SSG) evaluated Postfix-specific rules independently of the postfix package installed on the system. As a result, SSG reported Postfix-specific rules as fail instead of notapplicable . 
With the release of the RHBA-2021:4781 advisory, SSG correctly evaluates Postfix-specific rules only if the postfix package is installed, and reports notapplicable if the postfix package is not installed. ( BZ#1942281 ) Service Disabled rules are no longer ambiguous Previously, rule descriptions for the Service Disabled type in the SCAP Security Guide provided options for disabling and masking a service but did not specify whether the user should disable the service, mask it, or both. With the release of the RHBA-2021:1383 advisory, rule descriptions, remediations, and OVAL checks have been aligned and inform users that they must mask a service to disable it. ( BZ#1891435 ) Fixed Ansible remediations for scap-security-guide GNOME dconf rules Previously, Ansible remediations for some rules covering the GNOME dconf configuration systems were not aligned with the corresponding OVAL checks. Consequently, Ansible incorrectly remediated the following rules, marking them as failed in subsequent scans: dconf_gnome_screensaver_idle_activation_enabled dconf_gnome_screensaver_idle_delay dconf_gnome_disable_automount_open With the update released in the RHBA-2021:4781 advisory, Ansible regular expressions have been fixed. As a result, these rules remediate correctly in the dconf configuration. ( BZ#1976123 ) SELinux no longer blocks PCP from restarting unresponsive PMDAs Previously, a rule that allows pcp_pmie_t processes to communicate with Performance Metric Domain Agent (PMDA) was missing in the SELinux policy. As a consequence, SELinux denied the pmsignal process to restart unresponsive PMDAs. With this update, the missing rule has been added to the policy, and the Performance Co-Pilot (PCP) can now restart unresponsive PMDAs. ( BZ#1770123 ) SELinux no longer prevents auditd to halt or power off the system Previously, the SELinux policy did not contain a rule that allows the Audit daemon to start a power_unit_file_t systemd unit. Consequently, auditd could not halt or power off the system even when configured to do so in cases such as no space left on a logging disk partition. With this update, the missing rule has been added to the SELinux policy. As a result, auditd can now halt or power off the system. ( BZ#1780332 ) The chronyd service can now execute shells in SELinux Previously, the chronyd process, running under chronyd_t , was unable to execute the chrony-helper shell script, because the SELinux policy did not allow chronyd to execute any shell. In this update, the SELinux policy allows the chronyd process to run a shell that is labeled shell_exec_t . As a result, the chronyd service starts successfully under the Multi-Level Security (MLS) policy. (BZ#1775573) Tang reliably updates its cache When the Tang application generates its keys, for example, at first installation, Tang updates its cache. Previously, this process was unreliable, and the application cache did not update correctly to reflect Tang keys. This caused problems with using a Tang pin in Clevis, with the client displaying the error message Key derivation key not available . With this update, key generation and cache update logic was moved to Tang, removing the file watching dependency. As a result, the application cache remains in a correct state after cache update. ( BZ#1703445 ) 6.6. Servers and Services cupsd now consumes less memory during PPD caching Previously, the CUPS daemon consumed a lot of memory when many print queues with extensive Postscript Printer Description (PPD) were created. 
With this update, CUPSD checks if a cached file exists and if it has a newer or the same timestamp as the PPD file in /etc/cups/ppd , then it loads the cached file. Otherwise, it creates a new cached file based on the PPD file. As a result, memory consumption is reduced by 91% in the described scenario. (BZ#1672212) tuned no longer hangs on SIGHUP when a non-existent profile is selected When the tuned service receives the SIGHUP signal, it attempts to reload the profile. Prior to this update, tuned was unable to correctly handle situations when: The tuned profile was set to a non-existent profile, or The automatic profile selection mode was active and the recommended profile was non-existent. As a consequence, the tuned service became unresponsive and had to be restarted. This bug has been fixed, and the tuned service no longer hangs in the described scenarios. Note that the tuned behavior has changed with this update. Previously, when the user executed the tuned-adm off command and restarted the tuned service, tuned tried to load the recommended profile. Now, tuned loads no profile even if the recommended profile exists. ( BZ#1702724 ) tuned no longer applies settings from sysctl.d directories when the reapply_sysctl option is set to 1 Previously, if the reapply_sysctl configuration option was set to 1 , the tuned profile applied sysctl settings from the /usr/lib/sysctl.d , /lib/sysctl.d , and /usr/local/lib/sysctl.d directories after applying sysctl settings from a tuned profile. Consequently, settings from these directories would override sysctl settings from the tuned profile. With this update, tuned no longer applies sysctl settings from the mentioned directories when the reapply_sysctl option is set to 1 . Note that to re-apply sysctl settings you need to move them from the mentioned directories to /etc/sysctl.d , /etc/sysctl.conf or /run/sysctl.d directories or to a custom tuned profile. ( BZ#1776149 ) 6.7. Storage LVM volumes on VDO now shut down correctly Previously, the stacking of block layers on VDO was limited by the configuration of the VDO systemd units. As a result, the system shutdown sequence waited for 90 seconds when it tried to stop LVM volumes stored on VDO. After 90 seconds, the system uncleanly stopped the LVM and VDO volumes. With this update, the VDO systemd units have been improved, and as a result, the system shuts down cleanly with LVM on VDO. Additionally, the VDO startup configuration is now more flexible. You no longer have to add special mount options in the /etc/fstab file for most VDO configurations. ( BZ#1706154 ) 6.8. System and Subscription Management microdnf no longer fails to retrieve GPG key for custom Satellite repository Previously, the librhsm library, used internally by microdnf , incorrectly handled relative gpgkey paths, which are used in custom repositories hosted by Satellite. Consequently, when the user ran the microdnf command in a container to install a package signed with GNU Privacy Guard (GPG) from a custom repository through the host's Satellite subscription, microdnf failed with the following error: With this update, handling of relative gpgkey paths has been fixed in librhsm . As a result, the user can now successfully use the custom repository from Satellite inside containers. (BZ#1708628) YUM can now install RPM packages signed with GPG keys with revoked subkeys Previously, the YUM utility could not install RPM packages signed with GNU Privacy Guard (GPG) keys with revoked subkeys.
Consequently, YUM failed with the following error message: This update introduces a change in the code that checks revocation before checking binding signature. As a result, YUM can now install RPM packages signed with GPG keys with revoked subkeys. ( BZ#1778784 ) 6.9. RHEL in cloud environments Using cloud-init to create virtual machines with XFS and swap now works correctly Previously, using the cloud-init utility failed when creating a virtual machine (VM) with an XFS root file system and an enabled swap partition. In addition, the following error message was logged: kernel: swapon: swapfile has holes This update fixes the underlying code, which prevents the problem from occurring. ( BZ#1772505 )
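Returning to the firewalld zone drifting note in the Networking section: the behavior is controlled by a single key in the /etc/firewalld/firewalld.conf file. Administrators who prefer the more secure setup described there can change the RHEL 7.9 default of yes to the following line and then restart the firewalld service. The value shown here is the hardened choice, not the shipped default:
AllowZoneDrifting=no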
[ "WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.", "GPG enabled: failed to lookup digest in keyring.", "signature X doesn't bind subkey to key, type is subkey revocation" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/bug_fixes
Deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform
Deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform Red Hat Process Automation Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_red_hat_process_automation_manager_on_red_hat_openshift_container_platform/index
Chapter 16. Backing up and restoring Red Hat Quay on a standalone deployment
Chapter 16. Backing up and restoring Red Hat Quay on a standalone deployment Use the content within this section to back up and restore Red Hat Quay in standalone deployments. 16.1. Backing up Red Hat Quay on standalone deployments This procedure describes how to create a backup of Red Hat Quay on standalone deployments. Procedure Create a temporary backup directory, for example, quay-backup : USD mkdir /tmp/quay-backup The following example command denotes the local directory that the Red Hat Quay was started in, for example, /opt/quay-install : Change into the directory that bind-mounts to /conf/stack inside of the container, for example, /opt/quay-install , by running the following command: USD cd /opt/quay-install Compress the contents of your Red Hat Quay deployment into an archive in the quay-backup directory by entering the following command: USD tar cvf /tmp/quay-backup/quay-backup.tar.gz * Example output: config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key Back up the Quay container service by entering the following command: Redirect the contents of your conf/stack/config.yaml file to your temporary quay-config.yaml file by entering the following command: USD podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml Obtain the DB_URI located in your temporary quay-config.yaml by entering the following command: USD grep DB_URI /tmp/quay-backup/quay-config.yaml Example output: Extract the PostgreSQL contents to your temporary backup directory in a backup .sql file by entering the following command: USD pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql Print the contents of your DISTRIBUTED_STORAGE_CONFIG by entering the following command: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 7: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 7: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Sync the quay bucket to the /tmp/quay-backup/blob-backup/ directory from the hostname of your DISTRIBUTED_STORAGE_CONFIG : USD aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2 Example output: It is recommended that you delete the quay-config.yaml file after syncing the quay bucket because it contains sensitive information. The quay-config.yaml file will not be lost because it is backed up in the quay-backup.tar.gz file. 16.2. Restoring Red Hat Quay on standalone deployments This procedure describes how to restore Red Hat Quay on standalone deployments. Prerequisites You have backed up your Red Hat Quay deployment. 
Procedure Create a new directory that will bind-mount to /conf/stack inside of the Red Hat Quay container: USD mkdir /opt/new-quay-install Copy the contents of your temporary backup directory created in Backing up Red Hat Quay on standalone deployments to the new-quay-install1 directory created in Step 1: USD cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/ Change into the new-quay-install directory by entering the following command: USD cd /opt/new-quay-install/ Extract the contents of your Red Hat Quay directory: USD tar xvf /tmp/quay-backup/quay-backup.tar.gz * Example output: Recall the DB_URI from your backed-up config.yaml file by entering the following command: USD grep DB_URI config.yaml Example output: postgresql://<username>:[email protected]/quay Run the following command to enter the PostgreSQL database server: USD sudo postgres Enter psql and create a new database in 172.24.10.50 to restore the quay databases, for example, example_restore_registry_quay_database , by entering the following command: USD psql "host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123" postgres=> CREATE DATABASE example_restore_registry_quay_database; Example output: Connect to the database by running the following command: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trmg extension of your Quay database by running the following command: example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm; Example output: CREATE EXTENSION Exit the postgres CLI by entering the following command: \q Import the database backup to your new database by running the following command: USD psql "host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123" -W < /tmp/quay-backup/quay-backup.sql Example output: Update the value of DB_URI in your config.yaml from postgresql://<username>:[email protected]/quay to postgresql://<username>:[email protected]/example-restore-registry-quay-database before restarting the Red Hat Quay deployment. Note The DB_URI format is DB_URI postgresql://<login_user_name>:<login_user_password>@<postgresql_host>/<quay_database> . If you are moving from one PostgreSQL server to another PostgreSQL server, update the value of <login_user_name> , <login_user_password> and <postgresql_host> at the same time. In the /opt/new-quay-install directory, print the contents of your DISTRIBUTED_STORAGE_CONFIG bundle: USD cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10 Example output: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name> Note Your DISTRIBUTED_STORAGE_CONFIG in /opt/new-quay-install must be updated before restarting your Red Hat Quay deployment. 
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 13: USD export AWS_ACCESS_KEY_ID=<access_key> Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 13: USD export AWS_SECRET_ACCESS_KEY=<secret_key> Create a new s3 bucket by entering the following command: USD aws s3 mb s3://<new_bucket_name> --region us-east-2 Example output: USD make_bucket: quay Upload all blobs to the new s3 bucket by entering the following command: USD aws s3 sync --no-verify-ssl \ --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. s3://quay/ 1 The Red Hat Quay registry endpoint must be the same before backup and after restore. Example output: upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec ... Before restarting your Red Hat Quay deployment, update the storage settings in your config.yaml: DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>
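For convenience, the backup steps from Section 16.1 can be strung together into a single shell session. The sketch below only repeats the commands already shown in that section; the /opt/quay-install path, the PostgreSQL host, and the bracketed credentials and bucket name are the same placeholders used above and must be replaced with your own values:
mkdir /tmp/quay-backup
cd /opt/quay-install
tar cvf /tmp/quay-backup/quay-backup.tar.gz *
podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml
pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2
Remember to delete /tmp/quay-backup/quay-config.yaml after the sync completes, as recommended above, because it contains sensitive information.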
[ "mkdir /tmp/quay-backup", "podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9", "cd /opt/quay-install", "tar cvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' - /usr/bin/podman run --name quay-app -v /opt/quay-install/config:/conf/stack:Z -v /opt/quay-install/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9", "podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml", "grep DB_URI /tmp/quay-backup/quay-config.yaml", "postgresql://<username>:[email protected]/quay", "pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql", "DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2", "download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d", "mkdir /opt/new-quay-install", "cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/", "cd /opt/new-quay-install/", "tar xvf /tmp/quay-backup/quay-backup.tar.gz *", "config.yaml config.yaml.bak extra_ca_certs/ extra_ca_certs/ca.crt ssl.cert ssl.key", "grep DB_URI config.yaml", "postgresql://<username>:[email protected]/quay", "sudo postgres", "psql \"host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123\" postgres=> CREATE DATABASE example_restore_registry_quay_database;", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;", "CREATE EXTENSION", "\\q", "psql \"host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123\" -W < /tmp/quay-backup/quay-backup.sql", "SET SET SET SET SET", "cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>", "export AWS_ACCESS_KEY_ID=<access_key>", "export AWS_SECRET_ACCESS_KEY=<secret_key>", "aws s3 mb s3://<new_bucket_name> --region us-east-2", "make_bucket: quay", "aws s3 sync --no-verify-ssl --endpoint-url <example_endpoint_url> 1 /tmp/quay-backup/blob-backup/. 
s3://quay/", "upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec", "DISTRIBUTED_STORAGE_CONFIG: default: DISTRIBUTED_STORAGE_CONFIG: default: - S3Storage - s3_bucket: <new_bucket_name> storage_path: /registry s3_access_key: <s3_access_key> s3_secret_key: <s3_secret_key> host: <host_name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/standalone-deployment-backup-restore
Chapter 55. JSLT
Chapter 55. JSLT Since Camel 3.1 Only producer is supported The JSLT component allows you to process a JSON messages using an JSLT expression. This can be ideal when doing JSON to JSON transformation or querying data. 55.1. Dependencies When using jslt with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency> 55.2. URI format Where specName is the classpath-local URI of the specification to invoke; or the complete URL of the remote specification (eg: file://folder/myfile.vm ). 55.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 55.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 55.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 55.4. Component Options The JSLT component supports 5 options, which are listed below. Name Description Default Type allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean functions (advanced) JSLT can be extended by plugging in functions written in Java. Collection objectFilter (advanced) JSLT can be extended by plugging in a custom jslt object filter. JsonFilter 55.4.1. Endpoint Options The JSLT endpoint is configured using URI syntax: with the following path and query parameters: 55.4.1.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 55.4.1.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean mapBigDecimalAsFloats (producer) If true, the mapper will use the USE_BIG_DECIMAL_FOR_FLOATS in serialization features. false boolean objectMapper (producer) Setting a custom JSON Object Mapper to be used. ObjectMapper prettyPrint (common) If true, JSON in output message is pretty printed. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 55.5. Message Headers The JSLT component supports 2 message header(s), which is/are listed below: Name Description Default Type CamelJsltString (producer) Constant: HEADER_JSLT_STRING The JSLT Template as String. String CamelJsltResourceUri (producer) Constant: HEADER_JSLT_RESOURCE_URI The resource URI. String 55.6. Passing values to JSLT Camel can supply exchange information as variables when applying a JSLT expression on the body. The available variables from the Exchange are: name value headers The headers of the In message as a json object exchange.properties The Exchange properties as a json object. exchange is the name of the variable and properties is the path to the exchange properties. Available if allowContextMapAll option is true. All the values that cannot be converted to json with Jackson are denied and will not be available in the jslt expression. 
For example, the header named "type" and the exchange property "instance" can be accessed like this: { "type": USDheaders.type, "instance": USDexchange.properties.instance } 55.7. Samples A basic example is given below. from("activemq:My.Queue"). to("jslt:com/acme/MyResponse.json"); And a file based resource: from("activemq:My.Queue"). to("jslt:file://myfolder/MyResponse.json?contentCache=true"). to("activemq:Another.Queue"); You can also specify which JSLT expression the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelJsltResourceUri").constant("path/to/my/spec.json"). to("jslt:dummy?allowTemplateFromHeader=true"); Or send the whole JSLT expression via a header (suitable for querying): from("direct:in"). setHeader("CamelJsltString").constant(".published"). to("jslt:dummy?allowTemplateFromHeader=true"); Passing exchange properties to the JSLT expression can be done like this: from("direct:in"). to("jslt:com/acme/MyResponse.json?allowContextMapAll=true"); 55.8. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.jslt.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.jslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jslt.enabled Whether to enable auto configuration of the jslt component. This is enabled by default. Boolean camel.component.jslt.functions JSLT can be extended by plugging in functions written in Java. Collection camel.component.jslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jslt.object-filter JSLT can be extended by plugging in a custom jslt object filter. The option is a com.schibsted.spt.data.jslt.filters.JsonFilter type. JsonFilter
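As a quick illustration of the component-level configuration described in Section 55.3.1, the Spring Boot options above can be set in application.properties. This is only a sketch with illustrative values, not recommended settings:
camel.component.jslt.enabled = true
camel.component.jslt.allow-template-from-header = false
camel.component.jslt.lazy-start-producer = false
Options whose type is an object rather than a simple value, such as camel.component.jslt.functions (Collection) and camel.component.jslt.object-filter (JsonFilter), are normally supplied as beans in the Camel registry rather than as literal property values.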
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jslt-starter</artifactId> </dependency>", "jslt:specName[?options]", "jslt:resourceUri", "{ \"type\": USDheaders.type, \"instance\": USDexchange.properties.instance }", "from(\"activemq:My.Queue\"). to(\"jslt:com/acme/MyResponse.json\");", "from(\"activemq:My.Queue\"). to(\"jslt:file://myfolder/MyResponse.json?contentCache=true\"). to(\"activemq:Another.Queue\");", "from(\"direct:in\"). setHeader(\"CamelJsltResourceUri\").constant(\"path/to/my/spec.json\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");", "from(\"direct:in\"). setHeader(\"CamelJsltString\").constant(\".published\"). to(\"jslt:dummy?allowTemplateFromHeader=true\");", "from(\"direct:in\"). to(\"jslt:com/acme/MyResponse.json?allowContextMapAll=true\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jslt-component-starter
Chapter 7. Configuring traffic management and metric plugins in Argo Rollouts
Chapter 7. Configuring traffic management and metric plugins in Argo Rollouts Argo Rollouts supports configuring traffic management and metric plugins directly through the RolloutManager Custom Resource (CR). The native support for these plugins in Argo Rollouts eliminates the need to modify the config map manually, ensuring a consistent configuration across the system. As a result, Argo Rollouts no longer preserves user-defined plugins in the config map. Instead, it only applies to the plugins specified within the RolloutManager CR. By managing plugins directly within the RolloutManager CR, you can do the following: Centralize plugin configuration control. Avoid conflicts between the RolloutManager CR and config map. Simplify plugin management by allowing easy addition, removal, or modification of plugins without editing the config map directly. The traffic management plugin controls how traffic routes between different versions of your application during a rollout, while the metric plugin collects and evaluates metrics to determine the success or failure of a rollout. 7.1. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have access to the OpenShift Container Platform web console. You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster. You have installed Argo Rollouts on your OpenShift Container Platform cluster. 7.2. Enabling traffic management and metric plugins in Argo Rollouts To enable traffic management and metric plugins in Argo Rollouts, complete the following steps. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In the Administrator perspective, click Operators Installed Operators . Create or select the project where you want to create and configure a RolloutManager custom resource (CR) from the Project drop-down menu. Select Red Hat OpenShift GitOps from the Installed Operators . In the Details tab, under the Provided APIs section, click Create instance in the RolloutManager pane. On the Create RolloutManager page, select the YAML view and edit the YAML. Example adding the traffic management and metric plugins configuration in the RolloutManager CR apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollouts spec: plugins: trafficManagement: - name: argoproj-labs/gatewayAPI 1 location: https://github.com/sample-metric-plugin 2 metric: - name: argoproj-labs/sample-prometheus 3 location: https://github.com/sample-trafficrouter-plugin 4 sha256: dac10cbf57633c9832a17f8c27d2ca34aa97dd3d 5 1 Specifies the name of the trafficManagement plugin. 2 Specifies the location of the trafficManagement plugin. 3 Specifies the name of the metric plugin. 4 Specifies the location of the metric plugin. 5 Optional: Specifies the SHA256 signature of the plugin binary, which is downloaded and installed by the Rollouts controller. Click Create . In the RolloutManager tab, under the RolloutManagers section, verify that the Status field of the RolloutManager instance shows as Phase: Available . Verify that the traffic management and metric plugins are installed correctly by completing the following steps: In the Administrator perspective, click Workloads ConfigMaps . Click the argo-rollouts-config config map. As a result, the plugins defined in the RolloutManager CR are updated in the argo-rollouts-config config map. 
Example updated traffic management and metric plugins in the argo-rollouts-config ConfigMap kind: ConfigMap apiVersion: v1 metadata: name: argo-rollouts-config namespace: argo-rollouts labels: app.kubernetes.io/component: argo-rollouts app.kubernetes.io/name: argo-rollouts app.kubernetes.io/part-of: argo-rollouts data: metricPlugins: | - name: "argoproj-labs/sample-prometheus" 1 location: https://github.com/sample-metric-plugin 2 sha256: dac10cbf57633c9832a17f8c27d2ca34aa97dd3d 3 trafficRouterPlugins: | - name: argoproj-labs/gatewayAPI 4 location: https://github.com/sample-metric-plugin 5 sha256: "" 6 - name: argoproj-labs/openshift 7 location: file:/plugins/rollouts-trafficrouter-openshift/openshift-route-plugin 8 sha256: "" 9 1 Specifies the name of the metric plugin. 2 Specifies the location of the metric plugin. 3 Specifies the sha256 signature of the metric plugin. 4 Specifies the name of the trafficmanagement plugin. 5 Specifies the location of the trafficmanagement plugin. 6 Specifies the sha256 signature of the trafficmanagement plugin. 7 Specifies the name of the default trafficmanagement plugin. 8 Specifies the location of the default trafficmanagement plugin. 9 Specifies the sha256 signature of the trafficmanagement plugin. By configuring traffic and metric plugins directly through the RolloutManager CR, you streamline the rollout process, reduce the chance of errors, and ensure consistent plugin management across your environment. This enhances control and flexibility while simplifying deployment procedures. 7.3. Additional resources Traffic router plugin Metric plugin
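If you prefer the CLI to the web console for the verification above, you can print the same config map directly. This is a sketch; adjust the namespace to the one where your Argo Rollouts instance runs:
$ oc get configmap argo-rollouts-config -n argo-rollouts -o yaml
The trafficRouterPlugins and metricPlugins keys in the data section should list the plugins that you declared in the RolloutManager CR, alongside the built-in argoproj-labs/openshift plugin.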
[ "apiVersion: argoproj.io/v1alpha1 kind: RolloutManager metadata: name: argo-rollouts spec: plugins: trafficManagement: - name: argoproj-labs/gatewayAPI 1 location: https://github.com/sample-metric-plugin 2 metric: - name: argoproj-labs/sample-prometheus 3 location: https://github.com/sample-trafficrouter-plugin 4 sha256: dac10cbf57633c9832a17f8c27d2ca34aa97dd3d 5", "kind: ConfigMap apiVersion: v1 metadata: name: argo-rollouts-config namespace: argo-rollouts labels: app.kubernetes.io/component: argo-rollouts app.kubernetes.io/name: argo-rollouts app.kubernetes.io/part-of: argo-rollouts data: metricPlugins: | - name: \"argoproj-labs/sample-prometheus\" 1 location: https://github.com/sample-metric-plugin 2 sha256: dac10cbf57633c9832a17f8c27d2ca34aa97dd3d 3 trafficRouterPlugins: | - name: argoproj-labs/gatewayAPI 4 location: https://github.com/sample-metric-plugin 5 sha256: \"\" 6 - name: argoproj-labs/openshift 7 location: file:/plugins/rollouts-trafficrouter-openshift/openshift-route-plugin 8 sha256: \"\" 9" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/argo_rollouts/configuring_traffic_management_and_metric_plugins_in_argo_rollouts
Chapter 6. Btrfs (Technology Preview)
Chapter 6. Btrfs (Technology Preview) Note Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a future major release of Red Hat Enterprise Linux. For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4 Release Notes. Btrfs is a next-generation Linux file system that offers advanced management, reliability, and scalability features. It is unique in offering snapshots, compression, and integrated device management. 6.1. Creating a btrfs File System To create a basic btrfs file system, use the following command: For more information on creating btrfs file systems with added devices and specifying multi-device profiles for metadata and data, refer to Section 6.4, "Integrated Volume Management of Multiple Devices" .
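For example, assuming a spare disk is available at /dev/sdb (a placeholder name, substitute your own device), you could create and mount a basic btrfs file system as follows:
# mkfs.btrfs /dev/sdb
# mount /dev/sdb /mnt
Creating the file system destroys any existing data on the device, so verify the device name before running mkfs.btrfs.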
[ "mkfs.btrfs / dev / device" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-btrfs
Chapter 3. Switching a database to read-only mode
Chapter 3. Switching a database to read-only mode Databases of Directory Server run in read-write mode by default, in which users can both retrieve and store data. When you need a faithful image of a database at a given time, for example before a backup or before a manual initialization of a consumer, you can switch a database to read-only mode, which prevents users from creating, modifying, or deleting entries. 3.1. Prerequisites The database is in read-write mode. The database is not used in replication, since enabling read-only mode disables replication. 3.2. Switching a database to read-only mode using the command line This procedure describes how to switch a Directory Server database to read-only mode on the command line. Procedure List the suffixes and their corresponding databases: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test (test_database) Note the name or suffix of the database that you want to switch. Enable read-only mode with the --enable-readonly parameter and specify the database either by name or suffix: # dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix set --enable-readonly " test_database " Verification Attempt a write operation to the directory, such as: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: modify add: description description: foo The server should refuse to perform the operation: modifying entry "dc=example,dc=com" ldap_modify: Server is unwilling to perform (53) additional info: Server is read-only Additional resources Switching an entire instance to read-only mode 3.3. Switching a database to read-only mode using the web console This procedure describes how to switch a Directory Server database to read-only mode in the web console. Prerequisites You are logged in to the instance in the web console. Procedure Under Database , select the suffix in the configuration tree. Check the Database Read-Only Mode option. Click Save Configuration . Verification Attempt a write operation to the directory, such as: # ldapmodify -D " cn=Directory Manager " -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: modify add: description description: foo The server should refuse to perform the operation: modifying entry "dc=example,dc=com" ldap_modify: Server is unwilling to perform (53) additional info: Server is read-only 3.4. Additional resources Backing up Directory Server
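When the backup or consumer initialization is complete, you can switch the database back to read-write mode. The following is a sketch that assumes the dsconf backend suffix set subcommand shown above also accepts a --disable-readonly parameter; confirm the parameter name with dsconf backend suffix set --help on your version:
# dsconf -D " cn=Directory Manager " ldap://server.example.com backend suffix set --disable-readonly " test_database "
After switching back, the ldapmodify test from the verification steps should succeed instead of returning "Server is unwilling to perform (53)".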
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix list dc=example,dc=com (userroot) o=test (test_database)", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com backend suffix set --enable-readonly \" test_database \"", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: modify add: description description: foo", "modifying entry \"dc=example,dc=com\" ldap_modify: Server is unwilling to perform (53) additional info: Server is read-only", "ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: modify add: description description: foo", "modifying entry \"dc=example,dc=com\" ldap_modify: Server is unwilling to perform (53) additional info: Server is read-only" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_directory_databases/assembly_switching-a-database-to-read-only-mode_configuring-directory-databases
Chapter 9. Using Quality of Service (QoS) policies to manage data traffic
Chapter 9. Using Quality of Service (QoS) policies to manage data traffic You can offer varying service levels for VM instances by using quality of service (QoS) policies to apply rate limits to egress and ingress traffic on Red Hat OpenStack Platform (RHOSP) networks. You can apply QoS policies to individual ports, or apply QoS policies to a project network, where ports with no specific policy attached inherit the policy. Note Internal network owned ports, such as DHCP and internal router ports, are excluded from network policy application. You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum bandwidth QoS policies, you can only apply modifications when there are no instances that use any of the ports the policy is assigned to. 9.1. QoS rules You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat OpenStack Platform (RHOSP) Networking service (neutron): Minimum bandwidth ( minimum_bandwidth ) Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are made to provide no less than the specified bandwidth to each port on which the rule is applied. Bandwidth limit ( bandwidth_limit ) Provides bandwidth limitations on networks, ports, floating IPs, and router gateway IPs. If implemented, any traffic that exceeds the specified rate is dropped. DSCP marking ( dscp_marking ) Marks network traffic with a Differentiated Services Code Point (DSCP) value. QoS policies can be enforced in various contexts, including virtual machine instance placements, floating IP assignments, and gateway IP assignments. Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress traffic (upload from instance), ingress traffic (download to instance), or both. Table 9.1. Supported traffic direction by driver (all QoS rule types) Rule [8] Supported traffic direction by mechanism driver ML2/OVS ML2/SR-IOV ML2/OVN Minimum bandwidth Egress only [4][5] Egress only Currently, no support [6] Bandwidth limit Egress [1][2] and ingress Egress only [3] Egress and ingress DSCP marking Egress only N/A Egress only [7] [1] The OVS egress bandwidth limit is performed in the TAP interface and is traffic policing, not traffic shaping. [2] In RHOSP 16.2.2 and later, the OVS egress bandwidth limit is supported in hardware offloaded ports by applying the QoS policy in the network interface using ip link commands. [3] The mechanism drivers ignore the max-burst-kbits parameter because they do not support it. [4] Rule applies only to non-tunnelled networks: flat and VLAN. [5] The OVS egress minimum bandwidth is supported in hardware offloaded ports by applying the QoS policy in the network interface using ip link commands. [6] https://bugzilla.redhat.com/show_bug.cgi?id=2060310 [7] ML2/OVN does not support DSCP marking on tunneled protocols. [8] RHOSP does not support QoS for trunk ports. Table 9.2. Supported traffic direction by driver for placement reporting and scheduling (minimum bandwidth only) Enforcement type Supported traffic by direction mechanism driver ML2/OVS ML2/SR-IOV ML2/OVN Placement Egress and ingress Egress and ingress Currently, no support Table 9.3. 
Supported traffic direction by driver for enforcement types (bandwidth limit only) Enforcement type Supported traffic direction by mechanism driver ML2/OVS ML2/OVN Floating IP Egress and ingress Egress and ingress Gateway IP Egress and ingress Currently, no support [1] [1] https://bugzilla.redhat.com/show_bug.cgi?id=2064185 Additional resources Creating and applying a bandwidth limit QoS policy and rule Creating and applying a guaranteed minimum bandwidth QoS policy and rule Creating and applying a DSCP marking QoS policy and rule for egress traffic 9.2. Configuring the Networking service for QoS policies The quality of service feature in the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) is provided through the qos service plug-in. With the ML2/OVS and ML2/OVN mechanism drivers, qos is loaded by default. However, this is not true for ML2/SR-IOV. When using the qos service plug-in with the ML2/OVS and ML2/SR-IOV mechanism drivers, you must also load the qos extension on their respective agents. The following list summarizes the tasks that you must perform to configure the Networking service for QoS. The task details follow this list: For all types of QoS policies: Add the qos service plug-in. Add qos extension for the agents (OVS and SR-IOV only). Additional tasks for scheduling VM instances using minimum bandwidth policies only: Specify the hypervisor name if it differs from the name that the Compute service (nova) uses. Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node. (Optional) Mark vnic_types as not supported. Additional task for DSCP marking policies on systems that use ML/OVS with tunneling only: Set dscp_inherit to true . Prerequisites You must be the stack user with access to the RHOSP undercloud. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Confirm that the qos service plug-in is not already loaded. If the qos service plug-in is not loaded, then you receive a ResourceNotFound error. If you do not receive the error, then the plug-in is loaded and you do not need to perform the steps in this topic. Create a YAML custom environment file. Example Your environment file must contain the keywords parameter_defaults . On a new line below parameter_defaults add qos to the NeutronServicePlugins parameter: If you use ML2/OVS and ML2/SR-IOV mechanism drivers, then you must also load the qos extension on the agent, by using either the NeutronAgentExtensions or the NeutronSriovAgentExtensions variable, respectively: ML2/OVS ML2/SR-IOV If you want to schedule VM instances by using minimum bandwidth QoS policies, then you must also do the following: Add placement to the list of plug-ins and ensure the list also includes qos : If the hypervisor name matches the canonical hypervisor name used by the Compute service (nova), skip to step 7.iii. If the hypervisor name does not match the canonical hypervisor name used by the Compute service, specify the alternative hypervisor name, using resource_provider_default_hypervisor : ML2/OVS ML2/SR-IOV Important Another method for setting the alternative hypervisor name is to use resource_provider_hypervisor : ML2/OVS ML2/SR-IOV Configure the resource provider ingress and egress bandwidths for the relevant agents on each Compute node that needs to provide a minimum bandwidth. 
You can configure egress, ingress, or both, using the following formats: Configure only egress bandwidth, in kbps: Configure only ingress bandwidth, in kbps: Configure both egress and ingress bandwidth, in kbps: Example - OVS agent To configure the resource provider ingress and egress bandwidths for the OVS agent, add the following configuration to your network environment file: Example - SRIOV agent To configure the resource provider ingress and egress bandwidths for the SRIOV agent, add the following configuration to your network environment file: Optional: To mark vnic_types as not supported when multiple ML2 mechanism drivers support them by default and multiple agents are being tracked in the Placement service, also add the following configuration to your environment file: Example - OVS agent Example - SRIOV agent If you want to create DSCP marking policies and use ML2/OVS with a tunneling protocol (VXLAN or GRE), then under NeutronAgentExtensions , add the following lines: When dscp_inherit is true , the Networking service copies the DSCP value of the inner header to the outer header. Run the deployment command and include the core heat templates, other environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Confirm that the qos service plug-in is loaded: If the qos service plug-in is loaded, then you do not receive a ResourceNotFound error. Additional resources Extension drivers for the RHOSP Networking service Environment files in the Advanced Overcloud Customization guide Including environment files in overcloud creation in the Advanced Overcloud Customization guide Section 9.3.1, "Using Networking service back-end enforcement to enforce minimum bandwidth" Section 9.3.2, "Scheduling instances by using minimum bandwidth QoS policies" Section 9.4, "Limiting network traffic by using QoS policies" Section 9.5, "Prioritizing network traffic by using DSCP marking QoS policies" 9.3. Controlling minimum bandwidth by using QoS policies For the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), a guaranteed minimum bandwidth QoS rule can be enforced in two distinct contexts: Networking service back-end enforcement and resource allocation scheduling enforcement. The network back end, ML2/OVS or ML2/SR-IOV, attempts to guarantee that each port on which the rule is applied has no less than the specified network bandwidth. When you use resource allocation scheduling bandwidth enforcement, the Compute service (nova) only places VM instances on hosts that support the minimum bandwidth. You can apply QoS minumum bandwidth rules using Networking service back-end enforcement, resource allocation scheduling enforcement, or both. The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum bandwidth QoS policies. Table 9.4. ML2 mechanism drivers that support minimum bandwidth QoS ML2 mechanism driver Agent VNIC types ML2/SR-IOV sriovnicswitch direct ML2/OVS openvswitch normal Additional resources Section 9.3.1, "Using Networking service back-end enforcement to enforce minimum bandwidth" Section 9.3.2, "Scheduling instances by using minimum bandwidth QoS policies" 9.3.1. 
Using Networking service back-end enforcement to enforce minimum bandwidth You can guarantee a minimum bandwidth for network traffic for ports by applying Red Hat OpenStack Platform (RHOSP) quality of service (QoS) policies to the ports. These ports must be backed by a flat or VLAN physical network. Note Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN) does not support minimum bandwidth QoS rules. Prerequisites The RHOSP Networking service (neutron) must have the qos service plug-in loaded. (This is the default.) Do not mix ports with and without bandwidth guarantees on the same physical interface, because this might cause denial of necessary resources (starvation) to the ports without a guarantee. Tip Create host aggregates to separate ports with bandwidth guarantees from those ports without bandwidth guarantees. Procedure Source your credentials file. Example Confirm that the qos service plug-in is loaded in the Networking service: If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see Section 9.2, "Configuring the Networking service for QoS policies" . Identify the ID of the project you want to create the QoS policy for: Sample output Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named guaranteed_min_bw is created for the admin project: Configure the rules for the policy. Example In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw : Configure a port to apply the policy to. Example In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 : Verification ML2/SR-IOV Using root access, log in to the Compute node, and show the details of the virtual functions that are held in the physical function. Example Sample output ML2/OVS Using root access, log in to the compute node, show the tc rules and classes on the physical bridge interface. Example Sample output Additional resources network qos policy create in the Command Line Interface Reference network qos rule create in the Command Line Interface Reference port set in the Command Line Interface Reference 9.3.2. Scheduling instances by using minimum bandwidth QoS policies You can apply a minimum bandwidth QoS policy to a port to guarantee that the host on which its Red Hat OpenStack Platform (RHOSP) VM instance is spawned has a minimum network bandwidth. Prerequisites The RHOSP Networking service (neutron) must have the qos and placement service plug-ins loaded. The qos service plug-in is loaded by default. The Networking service must support the following API extensions: agent-resources-synced port-resource-request qos-bw-minimum-ingress You must use the ML2/OVS or ML2/SR-IOV mechanism drivers. You can only modify a minimum bandwidth QoS policy when there are no instances using any of the ports the policy is assigned to. The Networking service cannot update the Placement API usage information if a port is bound. The Placement service must support microversion 1.29. The Compute service (nova) must support microversion 2.72. Procedure Source your credentials file. 
Example Confirm that the qos service plug-in is loaded in the Networking service: If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see Section 9.2, "Configuring the Networking service for QoS policies" . Identify the ID of the project you want to create the QoS policy for: Sample output Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named guaranteed_min_bw is created for the admin project: Configure the rules for the policy. Example In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps are created for the policy named guaranteed_min_bw : Configure a port to apply the policy to. Example In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 : Verification Log in to the undercloud host as the stack user. Source the undercloud credentials file: List all the available resource providers: Sample output Check the bandwidth a specific resource provides. Example In this example, the bandwidth provided by interface enp6s0f1 on the host dell-r730-014 is checked, using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252 : Sample output To check claims against the resource provider when instances are running, run the following command: Example In this example, claims against the resource provider are checked on the host, dell-r730-014 , using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252 : Sample output Additional resources network qos policy create in the Command Line Interface Reference network qos rule create in the Command Line Interface Reference port set in the Command Line Interface Reference 9.4. Limiting network traffic by using QoS policies You can create a Red Hat OpenStack Platform (RHOSP) Networking service (neutron) quality of service (QoS) policy that limits the bandwidth on your RHOSP networks, ports, or floating IPs, and drops any traffic that exceeds the specified rate. Prerequisites The Networking service must have the qos service plug-in loaded.(The plug-in is loaded by default.) Procedure Source your credentials file. Example Confirm that the qos service plug-in is loaded in the Networking service: If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must load the qos services plug-in before you can continue. For more information, see Section 9.2, "Configuring the Networking service for QoS policies" . Identify the ID of the project you want to create the QoS policy for: Sample output Using the project ID from the step, create a QoS policy for the project. Example In this example, a QoS policy named bw-limiter is created for the admin project: Configure the rules for the policy. Note You can add more than one rule to a policy, as long as the type or direction of each rule is different. For example, You can specify two bandwidth-limit rules, one with egress and one with ingress direction. Example In this example, QoS ingress and egress rules are created for the policy named bw-limiter with a bandwidth limit of 50000 kbps and a maximum burst size of 50000 kbps: You can create a port with a policy attached to it, or attach a policy to a pre-existing port. 
Example - create a port with a policy attached In this example, the policy bw-limiter is associated with port port2 : Sample output Example - attach a policy to a pre-existing port In this example, the policy bw-limiter is associated with port1 : Verification Confirm that the bandwidth limit policy is applied to the port. Obtain the policy ID. Example In this example, the QoS policy, bw-limiter is queried: Sample output Query the port, and confirm that its policy ID matches the one obtained in the previous step. Example In this example, port1 is queried: Sample output Additional resources network qos rule create in the Command Line Interface Reference network qos rule set in the Command Line Interface Reference network qos rule delete in the Command Line Interface Reference network qos rule list in the Command Line Interface Reference 9.5. Prioritizing network traffic by using DSCP marking QoS policies You can use differentiated services code point (DSCP) to implement quality of service (QoS) policies on your Red Hat OpenStack Platform (RHOSP) network by embedding relevant values in the IP headers. The RHOSP Networking service (neutron) QoS policies can use DSCP marking to manage only egress traffic on neutron ports and networks. Prerequisites The Networking service must have the qos service plug-in loaded. (This is the default.) You must use the ML2/OVS or ML2/OVN mechanism drivers. Procedure Source your credentials file. Example Confirm that the qos service plug-in is loaded in the Networking service: If the qos service plug-in is not loaded, then you receive a ResourceNotFound error, and you must configure the Networking service before you can continue. For more information, see Section 9.2, "Configuring the Networking service for QoS policies" . Identify the ID of the project you want to create the QoS policy for: Sample output Using the project ID from the previous step, create a QoS policy for the project. Example In this example, a QoS policy named qos-web-servers is created for the admin project: Create a DSCP rule and apply it to a policy. Example In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-servers policy: Sample output You can change the DSCP value assigned to a rule. Example In this example, the DSCP mark value is changed to 22 for the rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 , in the qos-web-servers policy: You can delete a DSCP rule. Example In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 , in the qos-web-servers policy is deleted: Verification Confirm that the DSCP rule is applied to the QoS policy. Example In this example, the DSCP rule, d7f976ec-7fab-4e60-af70-f59bf88198e6 is applied to the QoS policy, qos-web-servers : Sample output Additional resources network qos rule create in the Command Line Interface Reference network qos rule set in the Command Line Interface Reference network qos rule delete in the Command Line Interface Reference network qos rule list in the Command Line Interface Reference 9.6. Applying QoS policies to projects by using Networking service RBAC With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron), you can add a role-based access control (RBAC) for quality of service (QoS) policies. As a result, you can apply QoS policies to individual projects. Prerequisites You must have one or more QoS policies available. 
Procedure Create an RHOSP Networking service RBAC policy associated with a specific QoS policy, and assign it to a specific project: Example For example, you might have a QoS policy that allows for lower-priority network traffic, named bw-limiter . Using a RHOSP Networking service RBAC policy, you can apply the QoS policy to a specific project: Additional resources network rbac create in the Command Line Interface Reference Section 9.3.1, "Using Networking service back-end enforcement to enforce minimum bandwidth" Section 9.3.2, "Scheduling instances by using minimum bandwidth QoS policies" Section 9.4, "Limiting network traffic by using QoS policies" Section 9.5, "Prioritizing network traffic by using DSCP marking QoS policies"
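To confirm that the RBAC entry was created, you can list the Networking service RBAC policies and inspect the one for your QoS policy. This is a sketch; the exact output columns depend on your release:
$ openstack network rbac list
$ openstack network rbac show <rbac_policy_id>
Users in the target project can then attach the shared policy to their own ports with openstack port set --qos-policy bw-limiter <port> , as shown earlier in this chapter.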
[ "source ~/stackrc", "openstack network qos policy list", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: NeutronServicePlugins: \"qos\"", "parameter_defaults: NeutronServicePlugins: \"qos\" NeutronAgentExtensions: \"qos\"", "parameter_defaults: NeutronServicePlugins: \"qos\" NeutronSriovAgentExtensions: \"qos\"", "parameter_defaults: NeutronServicePlugins: \"qos,placement\"", "parameter_defaults: NeutronServicePlugins: \"qos,placement\" ExtraConfig: Neutron::agents::ml2::ovs::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}", "parameter_defaults: NeutronServicePlugins: \"qos,placement\" ExtraConfig: Neutron::agents::ml2::sriov::resource_provider_default_hypervisor: %{hiera('fqdn_canonical')}", "parameter_defaults: ExtraConfig: Neutron::agents::ml2::ovs::resource_provider_hypervisors:\"ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}\"", "parameter_defaults: ExtraConfig: Neutron::agents::ml2::sriov::resource_provider_hypervisors: \"ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}\"", "NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:,<bridge1>:<egress_kbps>:,...,<bridgeN>:<egress_kbps>:", "NeutronOvsResourceProviderBandwidths: <bridge0>::<ingress_kbps>,<bridge1>::<ingress_kbps>,...,<bridgeN>::<ingress_kbps>", "NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:<ingress_kbps>,<bridge1>:<egress_kbps>:<ingress_kbps>,...,<bridgeN>:<egress_kbps>:<ingress_kbps>", "parameter_defaults: NeutronBridgeMappings: physnet0:br-physnet0 NeutronOvsResourceProviderBandwidths: br-physnet0:10000000:10000000", "parameter_defaults: NeutronML2PhysicalNetworkMtus: physnet0:1500,physnet1:1500 NeutronSriovResourceProviderBandwidths: ens5:40000000:40000000,ens6:40000000:40000000", "parameter_defaults: NeutronOvsVnicTypeBlacklist: direct", "parameter_defaults: NeutronSriovVnicTypeBlacklist: direct", "parameter_defaults: ControllerExtraConfig: neutron::config::server_config: agent/dscp_inherit: value: true", "openstack overcloud deploy --templates -e <other_environment_files> -e /home/stack/templates/my-neutron-environment.yaml", "openstack network qos policy list", "source ~/overcloudrc", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw", "openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw", "openstack port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12", "ip -details link show enp4s0f1", "50: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master mx-bond state UP mode DEFAULT group default qlen 1000 link/ether 98:03:9b:9d:73:74 brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:9d:73:75 promiscuity 0 minmtu 68 maxmtu 9978 bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 98:03:9b:9d:73:75 queue_id 0 addrgenmode eui64 numtxqueues 320 numrxqueues 40 gso_max_size 65536 gso_max_segs 65535 portname p1 switchid 74739d00039b0398 vf 0 link/ether 
00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 4 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 5 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 6 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 7 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off vf 8 link/ether fa:16:3e:2a:d2:7f brd ff:ff:ff:ff:ff:ff, tx rate 999 (Mbps), max_tx_rate 999Mbps, spoof checking off, link-state disable, trust off, query_rss off vf 9 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable, trust off, query_rss off", "tc class show dev mx-bond", "class htb 1:11 parent 1:fffe prio 0 rate 4Gbit ceil 34359Mbit burst 9000b cburst 8589b class htb 1:1 parent 1:fffe prio 0 rate 72Kbit ceil 34359Mbit burst 9063b cburst 8589b class htb 1:fffe root rate 34359Mbit ceil 34359Mbit burst 8589b cburst 8589b", "source ~/overcloudrc", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw", "openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw", "openstack port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12", "source ~/stackrc", "openstack --os-placement-api-version 1.17 resource provider list", "+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+ | uuid | name | generation | root_provider_uuid | parent_provider_uuid | +--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+ | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | dell-r730-014.localdomain | 28 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | None | | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | dell-r730-063.localdomain | 18 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | None | | e2f5082a-c965-55db-acb3-8daf9857c721 | dell-r730-063.localdomain:NIC Switch agent | 0 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | | d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | dell-r730-014.localdomain:NIC Switch agent | 0 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | | f1ca35e2-47ad-53a0-9058-390ade93b73e | 
dell-r730-063.localdomain:NIC Switch agent:enp6s0f1 | 13 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | e2f5082a-c965-55db-acb3-8daf9857c721 | | e518d381-d590-5767-8f34-c20def34b252 | dell-r730-014.localdomain:NIC Switch agent:enp6s0f1 | 19 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | +--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+", "(undercloud)USD openstack --os-placement-api-version 1.17 resource provider inventory list <rp_uuid>", "[stack@dell-r730-014 nova]USD openstack --os-placement-api-version 1.17 resource provider inventory list e518d381-d590-5767-8f34-c20def34b252", "+----------------------------+------------------+----------+------------+----------+-----------+----------+ | resource_class | allocation_ratio | min_unit | max_unit | reserved | step_size | total | +----------------------------+------------------+----------+------------+----------+-----------+----------+ | NET_BW_EGR_KILOBIT_PER_SEC | 1.0 | 1 | 2147483647 | 0 | 1 | 10000000 | | NET_BW_IGR_KILOBIT_PER_SEC | 1.0 | 1 | 2147483647 | 0 | 1 | 10000000 | +----------------------------+------------------+----------+------------+----------+-----------+----------+", "(undercloud)USD openstack --os-placement-api-version 1.17 resource provider show --allocations <rp_uuid>", "[stack@dell-r730-014 nova]USD openstack --os-placement-api-version 1.17 resource provider show --allocations e518d381-d590-5767-8f34-c20def34b252 -f value -c allocations", "{3cbb9e07-90a8-4154-8acd-b6ec2f894a83: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}}, 8848b88b-4464-443f-bf33-5d4e49fd6204: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}}, 9a29e946-698b-4731-bc28-89368073be1a: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}}, a6c83b86-9139-4e98-9341-dc76065136cc: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}}, da60e33f-156e-47be-a632-870172ec5483: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}}, eb582a0e-8274-4f21-9890-9a0d55114663: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}}}", "source ~/overcloudrc", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --share --project 98a2f53c20ce4d50a40dac4a38016c69 bw-limiter", "openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --ingress bw-limiter openstack network qos rule create --type bandwidth-limit --max-kbps 50000 --max-burst-kbits 50000 --egress bw-limiter", "openstack port create --qos-policy bw-limiter --network private port2", "+-----------------------+--------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | 
binding_vnic_type | normal | | created_at | 2022-07-04T19:20:24Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | dns_assignment | None | | dns_name | None | | extra_dhcp_opts | | | fixed_ips | ip_address='192.0.2.210', subnet_id='292f8c-...' | | id | f51562ee-da8d-42de-9578-f6f5cb248226 | | ip_address | None | | mac_address | fa:16:3e:d9:f2:ba | | name | port2 | | network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 | | option_name | None | | option_value | None | | port_security_enabled | False | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | | qos_policy_id | 8491547e-add1-4c6c-a50e-42121237256c | | revision_number | 6 | | security_group_ids | 0531cc1a-19d1-4cc7-ada5-49f8b08245be | | status | DOWN | | subnet_id | None | | tags | [] | | trunk_details | None | | updated_at | 2022-07-04T19:23:00Z | +-----------------------+--------------------------------------------------+", "openstack port set --qos-policy bw-limiter port1", "openstack network qos policy show bw-limiter", "+-------------------+-------------------------------------------------------------------+ | Field | Value | +-------------------+-------------------------------------------------------------------+ | description | | | id | 8491547e-add1-4c6c-a50e-42121237256c | | is_default | False | | name | bw-limiter | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | | revision_number | 4 | | rules | [{u'max_kbps': 50000, u'direction': u'egress', | | | u'type': u'bandwidth_limit', | | | u'id': u'0db48906-a762-4d32-8694-3f65214c34a6', | | | u'max_burst_kbps': 50000, | | | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}, | | | [{u'max_kbps': 50000, u'direction': u'ingress', | | | u'type': u'bandwidth_limit', | | | u'id': u'faabef24-e23a-4fdf-8e92-f8cb66998834', | | | u'max_burst_kbps': 50000, | | | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}] | | shared | False | +-------------------+-------------------------------------------------------------------+", "openstack port show port1", "+-------------------------+--------------------------------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | ip_address='192.0.2.128', mac_address='fa:16:3e:e1:eb:73' | | binding_host_id | compute-2.redhat.local | | binding_profile | | | binding_vif_details | port_filter='True' | | binding_vif_type | ovs | | binding_vnic_type | normal | | created_at | 2022-07-04T19:07:56 | | data_plane_status | None | | description | | | device_id | 53abd2c4-955d-4b44-b6ad-f106e3f15df0 | | device_owner | compute:nova | | dns_assignment | fqdn='host-192-0-2-213.openstacklocal.', hostname='my-host3', | | | ip_address='192.0.2.213' | | dns_domain | None | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='192.0.2..213', subnet_id='641d1db2-3b40-437b-b87b-63 | | | 079a7063ca' | | | ip_address='2001:db8:0:f868:f816:3eff:fee1:eb73', subnet_id='c7ed0 | | | 70a-d2ee-4380-baab-6978932a7dcc' | | id | 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12 | | location | cloud='', project.domain_id=, project.domain_name=, project.id='7c | | | b99d752fdb4944a2208ec9ee019226', project.name=, region_name='regio | | | nOne', zone= | | mac_address | fa:16:3e:e1:eb:73 | | name | port2 | | network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 | | port_security_enabled | True | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | | propagate_uplink_status | None | | qos_policy_id | 
8491547e-add1-4c6c-a50e-42121237256c | | resource_request | None | | revision_number | 6 | | security_group_ids | 4cdeb836-b5fd-441e-bd01-498d758704fd | | status | ACTIVE | | tags | | | trunk_details | None | | updated_at | 2022-07-04T19:11:41Z | +-------------------------+--------------------------------------------------------------------+", "source ~/overcloudrc", "openstack network qos policy list", "openstack project list", "+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+", "openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers", "openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers", "Created a new dscp_marking_rule: +-----------+--------------------------------------+ | Field | Value | +-----------+--------------------------------------+ | dscp_mark | 18 | | id | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+", "openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6", "openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6", "openstack network qos rule list qos-web-servers", "+-----------+--------------------------------------+ | dscp_mark | id | +-----------+--------------------------------------+ | 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 | +-----------+--------------------------------------+", "openstack network rbac create --type qos_policy --target-project <project_name | project_ID> --action access_as_shared <QoS_policy_name | QoS_policy_ID>", "openstack network rbac create --type qos_policy --target-project 80bf5732752a41128e612fe615c886c6 --action access_as_shared bw-limiter" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/config-qos-policies_rhosp-network
Chapter 3. Getting started with Argo Rollouts
Chapter 3. Getting started with Argo Rollouts Argo Rollouts supports canary and blue-green deployment strategies. This guide provides instructions with examples using a canary deployment strategy to help you deploy, update, promote and manually abort rollouts. With a canary-based deployment strategy, you split traffic between two application versions: Canary version : A new version of an application where you gradually route the traffic. Stable version : The current version of an application. After the canary version is stable and has all the user traffic directed to it, it becomes the new stable version. The stable version is discarded. 3.1. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have access to the OpenShift Container Platform web console. You have installed Red Hat OpenShift GitOps on your OpenShift Container Platform cluster. You have installed Argo Rollouts on your OpenShift Container Platform cluster. You have installed the Argo Rollouts CLI on your system. 3.2. Deploying a rollout As a cluster administrator, you can configure Argo Rollouts to progressively route a subset of user traffic to a new application version. Then you can test whether the application is deployed and working. The following example procedure creates a rollouts-demo rollout and service. The rollout then routes 20% of traffic to a canary version of the application, waits for a manual promotion, and then performs multiple automated promotions until it routes the entire traffic to the new application version. Procedure In the Administrator perspective of the web console, click Operators Installed Operators Red Hat OpenShift GitOps Rollout . Create or select the project in which you want to create and configure a Rollout custom resource (CR) from the Project drop-down menu. Click Create Rollout and enter the following configuration in the YAML view: apiVersion: argoproj.io/v1alpha1 kind: Rollout metadata: name: rollouts-demo spec: replicas: 5 strategy: canary: 1 steps: 2 - setWeight: 20 3 - pause: {} 4 - setWeight: 40 - pause: {duration: 45} 5 - setWeight: 60 - pause: {duration: 20} - setWeight: 80 - pause: {duration: 10} revisionHistoryLimit: 2 selector: matchLabels: app: rollouts-demo template: 6 metadata: labels: app: rollouts-demo spec: containers: - name: rollouts-demo image: argoproj/rollouts-demo:blue ports: - name: http containerPort: 8080 protocol: TCP resources: requests: memory: 32Mi cpu: 5m 1 The deployment strategy that the rollout must use. 2 Specify the steps for the rollout. This example gradually routes 20%, 40%, 60%, and 80% of traffic to the canary version. 3 The percentage of traffic that must be directed to the canary version. A value of 20 means that 20% of traffic is directed to the canary version. 4 Specify to the Argo Rollouts controller to pause indefinitely until it finds a request for promotion. 5 Specify to the Argo Rollouts controller to pause for a duration of 45 seconds. You can set the duration value in seconds ( s ), minutes ( m ), or hours ( h ). For example, you can specify 1h for an hour. If no value is specified, the duration value defaults to seconds. 6 Specifies the pods that are to be created. Click Create . Note To ensure that the rollout becomes available quickly on creation, the Argo Rollouts controller automatically treats the argoproj/rollouts-demo:blue initial container image specified in the .spec.template.spec.containers.image field as a stable version. 
When the Rollout resource is first created, all of the traffic is routed to the stable version of the application and the canary steps are skipped. However, for all subsequent application upgrades with modifications to the .spec.template.spec.containers.image field, the Argo Rollouts controller performs the canary steps, as usual. Verify that your rollout was created correctly by running the following command: USD oc argo rollouts list rollouts -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Example output NAME STRATEGY STATUS STEP SET-WEIGHT READY DESIRED UP-TO-DATE AVAILABLE rollouts-demo Canary Healthy 8/8 100 5/5 5 5 5 Create the Kubernetes service that targets the rollouts-demo rollout. In the Administrator perspective of the web console, click Networking Services . Click Create Service and enter the following configuration in the YAML view: apiVersion: v1 kind: Service metadata: name: rollouts-demo spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo 1 Specifies the name of the port used by the application for running inside the container. 2 Ensure that the contents of the selector field are the same as in the Rollout custom resource (CR). Click Create . The rollout automatically updates the created service with the pod template hash of the canary ReplicaSet . For example, rollouts-pod-template-hash: 687d76d795 . Watch the progression of your rollout by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Example output Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:blue (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 4m50s └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 4m50s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-bv5zf Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 4m49s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 4m49s ready:1/1 After the rollout has been created, you can verify that the Status field of the rollout shows Phase: Healthy . In the Rollout tab, under the Rollouts section, verify that the Status field of the rollouts-demo rollout shows Phase: Healthy . Tip Alternatively, you can verify that the rollout is healthy by running the following command: USD oc argo rollouts status rollouts-demo -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined. Example output Healthy You are now ready to perform a canary deployment by updating the Rollout CR. 3.3. Updating the rollout When you update the Rollout custom resource (CR) with modifications to the .spec.template.spec fields, for example, the container image version, new pods are created through the ReplicaSet by using the updated container image version. Procedure Simulate the new canary version of the application by modifying the container image deployed in the rollout. In the Administrator perspective of the web console, go to Operators Installed Operators Red Hat OpenShift GitOps Rollout .
Select the existing rollouts-demo rollout and modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:blue to argoproj/rollouts-demo:yellow in the YAML view. Click Save and then click Reload . The container image deployed in the rollout is modified and the rollout initiates a new canary deployment. Note As per the setWeight property defined in the .spec.strategy.canary.steps field of the Rollout CR, initially 20% of traffic to the route reaches the canary version and the rollout is paused indefinitely until a request for promotion is received. Example rollout with 20% of traffic directed to the canary version, paused indefinitely until a request for promotion is made in the subsequent step spec: replicas: 5 strategy: canary: 1 steps: 2 - setWeight: 20 3 - pause: {} 4 # (...) 1 The deployment strategy that the rollout must use. 2 The steps for the rollout. This example gradually routes 20%, 40%, 60%, and 80% of traffic to the canary version. 3 The percentage of traffic that must be directed to the canary version. A value of 20 means that 20% of traffic is directed to the canary version. 4 Instructs the Argo Rollouts controller to pause indefinitely until it receives a request for promotion. Watch the progression of your rollout by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined. Example output Name: rollouts-demo Namespace: spring-petclinic Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 5 Updated: 1 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 9m51s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1 The rollout is now in a paused status, because there is no pause duration specified in the rollout's update strategy configuration. Test the newly deployed version of the application and ensure that it is working as expected. For example, verify the application by interacting with the application through the browser and try running tests or observing container logs. The rollout will remain paused until you advance it to the next step. After you verify that the new version of the application is working as expected, you can decide whether to continue with promotion or to abort the rollout. Accordingly, follow the instructions in "Promoting the rollout" or "Manually aborting the rollout". 3.4. Promoting the rollout Because your rollout is now in a paused status, as a cluster administrator, you must now manually promote the rollout to allow it to progress to the next step. Procedure Promote the rollout by running the following command in the Argo Rollouts CLI: USD oc argo rollouts promote rollouts-demo -n <namespace> 1 1 Specify the namespace where the Rollout resource is defined.
Example output rollout 'rollouts-demo' promoted This increases the traffic weight to the canary version to 40%. Verify that the rollout progresses through the rest of the steps by running the following command: USD oc argo rollouts get rollout rollouts-demo -n <namespace> --watch 1 1 Specify the namespace where the Rollout resource is defined. Because the rest of the steps as defined in the Rollout CR have set durations, for example, pause: {duration: 45} , the Argo Rollouts controller waits for that duration and then automatically moves to the next step. After all steps are completed successfully, the new ReplicaSet object is marked as the stable replica set. Example output Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 14m ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 6m5s stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 6m4s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-g9kd5 Pod ✔ Running 2m4s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 78s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 58s ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 47s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 14m 3.5. Manually aborting the rollout When using a canary deployment, the rollout deploys an initial canary version of the application. You can verify it either manually or programmatically. After you verify the canary version and promote it to stable, the new stable version is made available to all users. However, sometimes bugs, errors, or deployment issues are discovered in the canary version, and you might want to abort the canary rollout and roll back to the stable version of your application. Aborting a canary rollout deletes the resources of the new canary version and restores the stable version of your application. All network traffic such as ingress, route, or virtual service that was being directed to the canary returns to the original stable version. The following example procedure deploys a new red canary version of your application, and then aborts it before it is fully promoted to stable. Procedure Update the container image version and modify the .spec.template.spec.containers.image value from argoproj/rollouts-demo:yellow to argoproj/rollouts-demo:red by running the following command in the Argo Rollouts CLI: USD oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:red -n <namespace> 1 1 Specify the namespace where the Rollout custom resource (CR) is defined. Example output rollout "rollouts-demo" image updated The container image deployed in the rollout is modified and the rollout initiates a new canary deployment. Wait for the rollout to reach the paused status. Verify that the rollout deploys the rollouts-demo:red canary version and reaches the paused status by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined.
Example output Name: rollouts-demo Namespace: spring-petclinic Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:red (canary) argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 1 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 17m ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet ✔ Healthy 75s canary │ └──□ rollouts-demo-5747959bdb-fdrsg Pod ✔ Running 75s ready:1/1 ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 9m45s stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 9m44s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 4m58s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 4m38s ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 4m27s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 17m Abort the update of the rollout by running the following command: USD oc argo rollouts abort rollouts-demo -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined. Example output rollout 'rollouts-demo' aborted The Argo Rollouts controller deletes the canary resources of the application and rolls back to the stable version. Verify that, after aborting the rollout, the canary ReplicaSet is scaled to 0 replicas by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined. Example output Name: rollouts-demo Namespace: spring-petclinic Status: ✖ Degraded Message: RolloutAborted: Rollout aborted update to revision 3 Strategy: Canary Step: 0/8 SetWeight: 0 ActualWeight: 0 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 0 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✖ Degraded 24m ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet • ScaledDown 7m38s canary ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 16m stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 16m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 11m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 11m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 10m ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-mlbsh Pod ✔ Running 4m47s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 24m The rollout status is marked as Degraded , indicating that even though the application has rolled back to the stable version, yellow , the rollout is not currently at the desired version, red , that was set in the .spec.template.spec.containers.image field. Note The Degraded status does not reflect the health of the application. It only indicates that there is a mismatch between the desired and running container image versions. Update the container image version back to the stable version, yellow , by modifying the .spec.template.spec.containers.image value with the following command: USD oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined. Example output rollout "rollouts-demo" image updated The rollout skips the analysis and promotion steps, rolls back to the stable version, yellow , and fast-tracks the deployment of the stable ReplicaSet .
Verify that the rollout status is immediately marked as Healthy by running the following command: USD oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1 1 Specify the namespace where the Rollout CR is defined. Example output Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 63m ├──# revision:4 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 55m stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 55m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 50m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 50m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 50m ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-mlbsh Pod ✔ Running 44m ready:1/1 ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet • ScaledDown 46m └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 63m 3.6. Additional resources Installing Red Hat OpenShift GitOps Uninstalling Red Hat OpenShift GitOps RolloutManager Custom Resource specification Argo Rollouts CLI overview Argo Rollouts tech preview limitations
[ "apiVersion: argoproj.io/v1alpha1 kind: Rollout metadata: name: rollouts-demo spec: replicas: 5 strategy: canary: 1 steps: 2 - setWeight: 20 3 - pause: {} 4 - setWeight: 40 - pause: {duration: 45} 5 - setWeight: 60 - pause: {duration: 20} - setWeight: 80 - pause: {duration: 10} revisionHistoryLimit: 2 selector: matchLabels: app: rollouts-demo template: 6 metadata: labels: app: rollouts-demo spec: containers: - name: rollouts-demo image: argoproj/rollouts-demo:blue ports: - name: http containerPort: 8080 protocol: TCP resources: requests: memory: 32Mi cpu: 5m", "oc argo rollouts list rollouts -n <namespace> 1", "NAME STRATEGY STATUS STEP SET-WEIGHT READY DESIRED UP-TO-DATE AVAILABLE rollouts-demo Canary Healthy 8/8 100 5/5 5 5 5", "apiVersion: v1 kind: Service metadata: name: rollouts-demo spec: ports: 1 - port: 80 targetPort: http protocol: TCP name: http selector: 2 app: rollouts-demo", "oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:blue (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 4m50s └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 4m50s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-bv5zf Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 4m49s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 4m49s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 4m49s ready:1/1", "oc argo rollouts status rollouts-demo -n <namespace> 1", "Healthy", "spec: replicas: 5 strategy: canary: 1 steps: 2 - setWeight: 20 3 - pause: {} 4 # (...)", "oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:blue (stable) argoproj/rollouts-demo:yellow (canary) Replicas: Desired: 5 Current: 5 Updated: 1 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 9m51s ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 99s canary │ └──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 98s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet ✔ Healthy 9m51s stable ├──□ rollouts-demo-687d76d795-75k57 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-jsxg8 Pod ✔ Running 9m50s ready:1/1 ├──□ rollouts-demo-687d76d795-rsgtv Pod ✔ Running 9m50s ready:1/1 └──□ rollouts-demo-687d76d795-xrmrj Pod ✔ Running 9m50s ready:1/1", "oc argo rollouts promote rollouts-demo -n <namespace> 1", "rollout 'rollouts-demo' promoted", "oc argo rollouts get rollout rollouts-demo -n <namespace> --watch 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 14m ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 6m5s stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 6m4s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-g9kd5 Pod ✔ Running 2m4s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 78s ready:1/1 │ ├──□ 
rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 58s ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 47s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 14m", "oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:red -n <namespace> 1", "rollout \"rollouts-demo\" image updated", "oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ॥ Paused Message: CanaryPauseStep Strategy: Canary Step: 1/8 SetWeight: 20 ActualWeight: 20 Images: argoproj/rollouts-demo:red (canary) argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 1 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ॥ Paused 17m ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet ✔ Healthy 75s canary │ └──□ rollouts-demo-5747959bdb-fdrsg Pod ✔ Running 75s ready:1/1 ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 9m45s stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 9m44s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 4m58s ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 4m38s ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 4m27s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 17m", "oc argo rollouts abort rollouts-demo -n <namespace> 1", "rollout 'rollouts-demo' aborted", "oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ✖ Degraded Message: RolloutAborted: Rollout aborted update to revision 3 Strategy: Canary Step: 0/8 SetWeight: 0 ActualWeight: 0 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 0 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✖ Degraded 24m ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet • ScaledDown 7m38s canary ├──# revision:2 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 16m stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 16m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 11m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 11m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 10m ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-mlbsh Pod ✔ Running 4m47s ready:1/1 └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 24m", "oc argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow -n <namespace> 1", "rollout \"rollouts-demo\" image updated", "oc argo rollouts get rollout rollouts-demo --watch -n <namespace> 1", "Name: rollouts-demo Namespace: spring-petclinic Status: ✔ Healthy Strategy: Canary Step: 8/8 SetWeight: 100 ActualWeight: 100 Images: argoproj/rollouts-demo:yellow (stable) Replicas: Desired: 5 Current: 5 Updated: 5 Ready: 5 Available: 5 NAME KIND STATUS AGE INFO ⟳ rollouts-demo Rollout ✔ Healthy 63m ├──# revision:4 │ └──⧉ rollouts-demo-6cf78c66c5 ReplicaSet ✔ Healthy 55m stable │ ├──□ rollouts-demo-6cf78c66c5-zrgd4 Pod ✔ Running 55m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-2ptpp Pod ✔ Running 50m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-tmk6c Pod ✔ Running 50m ready:1/1 │ ├──□ rollouts-demo-6cf78c66c5-zv6lx Pod ✔ Running 50m ready:1/1 │ └──□ rollouts-demo-6cf78c66c5-mlbsh Pod ✔ Running 44m ready:1/1 ├──# revision:3 │ └──⧉ rollouts-demo-5747959bdb ReplicaSet • ScaledDown 46m └──# revision:1 └──⧉ rollouts-demo-687d76d795 ReplicaSet • ScaledDown 63m" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/argo_rollouts/getting-started-with-argo-rollouts
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1]
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) Groups is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 3.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 3.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalSubjectAccessReview Table 3.2. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 3.3. HTTP responses HTTP code Response body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty
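For illustration only, a minimal request body for this endpoint might look like the following sketch. The namespace, user, group, and resource values shown here are assumptions for the example, not values defined by this API reference:
apiVersion: authorization.openshift.io/v1
kind: LocalSubjectAccessReview
namespace: my-project
verb: get
resource: pods
resourceAPIGroup: ""
resourceAPIVersion: v1
resourceName: ""
path: ""
isNonResourceURL: false
user: alice
groups:
  - developers
scopes: []
A body like this can be submitted with a POST to /apis/authorization.openshift.io/v1/namespaces/my-project/localsubjectaccessreviews (for example, with oc create -f ) to ask whether the user alice, or the developers group, can get pods in the my-project namespace.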
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/localsubjectaccessreview-authorization-openshift-io-v1
36.3. Downloading the Upgraded Kernel
36.3. Downloading the Upgraded Kernel There are several ways to determine whether an updated kernel is available for the system. Security Errata - Go to the following location for information on security errata, including kernel upgrades that fix security issues: Via Quarterly Updates - Refer to the following location for details: Via Red Hat Network - Download and install the kernel RPM packages. Red Hat Network can download the latest kernel, upgrade the kernel on the system, create an initial RAM disk image if needed, and configure the boot loader to boot the new kernel. For more information, refer to http://www.redhat.com/docs/manuals/RHNetwork/ . If Red Hat Network was used to download and install the updated kernel, follow the instructions in Section 36.5, "Verifying the Initial RAM Disk Image" and Section 36.6, "Verifying the Boot Loader" , but do not change the kernel to boot by default. Red Hat Network automatically changes the default kernel to the latest version. To install the kernel manually, continue to Section 36.4, "Performing the Upgrade" .
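As a hedged illustration of a manual installation (the package file name below is a placeholder; substitute the actual kernel package you downloaded), install the new kernel alongside the old one rather than upgrading over it, so that the previous kernel remains available as a fallback:
rpm -ivh kernel-<version>.<arch>.rpm
Using the -i (install) option instead of -U (upgrade) keeps the currently running kernel installed in case the new kernel fails to boot.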
[ "http://www.redhat.com/apps/support/errata/", "http://www.redhat.com/apps/support/errata/rhlas_errata_policy.html" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/manually_upgrading_the_kernel-downloading_the_upgraded_kernel
Chapter 5. Additional resources about RPM packaging
Chapter 5. Additional resources about RPM packaging This section provides references to various topics related to RPMs, RPM packaging, and RPM building. Some of these are advanced and extend the introductory material included in this documentation. Red Hat Software Collections Overview - The Red Hat Software Collections offering provides continuously updated development tools in the latest stable versions. Red Hat Software Collections - The Packaging Guide provides an explanation of Software Collections and details how to build and package them. Developers and system administrators with a basic understanding of software packaging with RPM can use this Guide to get started with Software Collections. Mock - Mock provides a community-supported package building solution for various architectures and for Fedora or RHEL versions different from those of the build host. RPM Documentation - The official RPM documentation. Fedora Packaging Guidelines - The official packaging guidelines for Fedora, useful for all RPM-based distributions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/rpm_packaging_guide/additional-resources-about-rpm-packaging
probe::json_data
probe::json_data Name probe::json_data - Fires whenever JSON data is wanted by a reader. Synopsis json_data Values None Context This probe fires when the JSON data is about to be read. The probe handler must gather the data and then call the following macros to output the data in JSON format. First, @json_output_data_start must be called. That call is followed by one or more of the following (one call for each data item): @json_output_string_value , @json_output_numeric_value , @json_output_array_string_value , and @json_output_array_numeric_value . Finally, @json_output_data_end must be called.
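For illustration only, a probe handler might look like the following sketch. The data item names, values, and the global variable reads are assumptions for the example, and the exact macro arguments should be checked against the json tapset shipped on your system:
probe json_data
{
  # emit one JSON document for the reader
  @json_output_data_start
  # one macro call per data item (names and values here are examples only)
  @json_output_string_value("version", "1.0")
  @json_output_numeric_value("reads", reads)
  @json_output_data_end
}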
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-json-data
Chapter 6. Formatting Hammer output
Chapter 6. Formatting Hammer output You can modify the default formatting of the output of hammer commands to simplify the processing of this output by other command-line tools and applications. For example, to list organizations in a CSV format with a custom separator (in this case a semicolon), use the following command: Output in CSV format is useful, for example, when you need to parse IDs and use them in a for loop. Several other formatting options are available with the --output option: Replace output_format with one of: table - generates output in the form of a human-readable table (default). base - generates output in the form of key-value pairs. yaml - generates output in the YAML format. csv - generates output in the Comma Separated Values format. To define a custom separator, use the --csv and --csv-separator options instead. json - generates output in the JavaScript Object Notation format. silent - suppresses the output.
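As an illustrative sketch (treat the column position as an assumption, because the exact columns can differ between Satellite versions), you could loop over organization IDs parsed from CSV output like this:
# Print details for every organization, using the first CSV column as the ID
for id in $(hammer --csv --no-headers organization list | cut -d ',' -f 1); do
  hammer organization info --id "$id"
done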
[ "hammer --csv --csv-separator \";\" organization list", "hammer --output output_format organization list" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/formatting-hammer-output
Chapter 6. Assessing system-upgrade readiness with the pre-upgrade analysis task
Chapter 6. Assessing system-upgrade readiness with the pre-upgrade analysis task This task is a component of the in-place upgrade capability for Red Hat Enterprise Linux using the Leapp tool. For more information about the Leapp tool and using it to check upgrade readiness manually, see Upgrading from RHEL 8 to RHEL 9, Instructions for an in-place upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 . The pre-upgrade analysis task checks the readiness of systems to upgrade from Red Hat Enterprise Linux (RHEL) 8 to RHEL 9. If Insights detects upgrade-blocking issues, you can see more information about the issues, including steps to resolve them, in Insights for Red Hat Enterprise Linux on the Red Hat Hybrid Cloud Console (Console). The pre-upgrade analysis task can run on any RHEL 8 system that is connected to Red Hat Insights using the remote host configuration (rhc) solution. You can verify that your system is connected to Insights by locating it in the Insights system inventory on the Console. If the system is not in the inventory, see Remote Host Configuration and Management documentation for information about connecting systems to Insights. You can also run the Leapp utility manually on systems. When an Insights-connected system has a Leapp report in its archive, whether the utility was run manually or as an Insights task, you can see results from the report in Insights. 6.1. Requirements and prerequisites The following requirements and prerequisites apply to the pre-upgrade analysis task: This guide assumes that you have read and understood the in-place upgrade documentation before attempting to perform any upgrade-related action using Red Hat Insights. Your systems must be eligible for in-place upgrade. See in-place upgrade documentation for system requirements and limitations. Your RHEL system must be connected to Red Hat Insights using the remote host configuration solution in order to execute Insights tasks and other remediation playbooks from the Insights for Red Hat Enterprise Linux UI. For more information, see Remote Host Configuration and Management documentation. You are logged in to console.redhat.com with Tasks administrator privileges granted in User Access. Note All members of the Default admin access group have Tasks administrator access. If you are not a member of a User Access group with this role, you will not see any tasks on the Tasks page. For more information about User Access, including how to request greater access to Insights features, see User Access Configuration Guide for Role-based Access Control (RBAC) . 6.2. Running the pre-upgrade analysis task Use the following procedure to analyze the readiness of RHEL systems for upgrading from RHEL 8 to RHEL 9. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks . Locate the Pre-upgrade analysis for in-place upgrade from RHEL 8 task. Note If you cannot see any tasks on the page, you might not have adequate User Access. See User Access Configuration Guide for Role-based Access Control (RBAC) for more information. Optional: You can view details of the pre-upgrade analysis utility by clicking Download preview of playbook . Click Run task. On the Pre-upgrade analysis for in-place upgrade from RHEL 8 popup, select systems on which to run the pre-upgrade analysis by checking the box next to each system.
Note By default, the list of systems is filtered to only display systems that are eligible to run the task. You can change or add filters to expand the parameters of included systems from your inventory. Click Execute task to run the task on the selected systems. Verification Use the following procedure to verify that a task has been executed successfully. Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks page and click the Activity tab. The status of tasks, whether they are in progress or have been completed, can be viewed here. Locate your task based on the run date and time. You can see whether the task completed or failed. 6.3. Reviewing the pre-upgrade analysis task report After executing the pre-upgrade analysis task on systems, you can review specific details and upgrade-inhibiting recommendations for each system. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks and click the Activity tab. Click on the task name to view the results of a task. Note the run date and time so that you select the correct report. Click on the carat next to the system name to view a list of alerts for that system. View information about upgrade-inhibiting alerts by clicking on the carat next to an alert with a white exclamation mark inside of a red dot and accompanying red alert text. Note In addition to the inhibitor alerts, you might also see lower-severity and informative alerts that do not require remediation in order for the upgrade to proceed. Review the report thoroughly. While some recommendations may be informational, it is crucial to take action if you encounter any errors or warnings. In the event of such issues, address them on your systems and re-run the pre-upgrade task to assess the impact of your remediation efforts. Note Certain errors are classified as official inhibitors, and proceeding with the upgrade is not possible until these are remediated. 6.4. Viewing upgrade-inhibiting recommendations After running the pre-upgrade analysis task, or manually running the Leapp tool on individual systems, you can view a list of recommendations for upgrade-inhibiting issues in your infrastructure. Using the list of pre-upgrade recommendations, you can view the following information about each recommendation: Recommendation details Affected-system information Total risk and impact insights Risk to system availability during resolution actions Prerequisites Any user with default access (the default for every user) can view the list of in-place upgrade recommendations. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Note Currently, the in-place upgrade recommendations list only shows recommendations that Insights has identified as upgrade inhibitors. All in-place upgrade recommendations, including non-inhibitors, can be seen in the detailed view of each executed task. 6.5. Remediating upgrade-inhibiting recommendations You can use the in-place upgrade recommendations list as a basis for remediating upgrade-inhibiting issues on systems in your infrastructure. Some recommendations have a playbook available for automating the execution of remediations directly from the Insights for Red Hat Enterprise Linux UI.
However, some recommendations require manual resolutions, the steps of which are customized for the system and recommendation pair, and are provided with the recommendation. You can tell which recommendations have playbooks available by viewing the Remediation column in the list of recommendations. For more information about Insights remediations, see the Red Hat Insights Remediations Guide . 6.5.1. Using Insights remediation playbooks to resolve RHEL upgrade-inhibiting recommendations You can automate the remediation of upgrade-inhibiting recommendations using Ansible Playbooks that you create in Insights. Use the following procedure to locate your inhibitor issues and select recommendations and systems to remediate. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Choose a recommendation with the word "Playbook" in the Remediation column, which indicates issues that have a playbook available. For each recommendation with an available playbook , take the following actions: Click on the recommendation to see more information about the issue, including the systems that are affected. Check the box next to each system you want to add to the playbook and click Remediate . In the popup, select Create a new playbook and enter a name for the playbook, then click Next . Optional: Alternatively, you can add the resolution for the selected systems to an existing playbook. Review the included systems and click Next . Review the included recommendation. You can click the carat next to the recommendation name to see included systems. Important Some resolutions require the system to reboot. Auto reboot is not enabled by default, but you can enable it by clicking Turn on autoreboot above the list of recommendations. Click Submit . The final popup view confirms that the playbook was created successfully. You can select to return to the application or open the playbook. Find the playbook in Automation Toolkit > Remediations and click on it to open it. The playbook includes a list of actions. Select the actions to execute by checking the box next to each one. Click Execute playbook to run the playbook on the specified systems. On the popup, click on the Execute playbook on systems button. The playbook runs on those systems. 6.5.2. Remediating RHEL upgrade-inhibiting recommendations manually You can remediate upgrade-inhibiting recommendations by manually applying resolution steps on affected systems. The following procedure shows how to find the resolution steps for a system and recommendation pairing. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Choose a recommendation with the word "Manual" in the Remediation column, which indicates that the issue requires manual remediation. For each recommendation requiring a manual remediation, take the following actions: Click on the recommendation to open the recommendation-details page, which shows affected systems. Click on a system name. Pick a recommendation to resolve manually and click on the carat next to it to view the Steps to resolve the recommendation on the system. Perform the resolution steps on the system. Repeat steps b, c, and d for each affected system.
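As noted at the start of this chapter, you can also run the Leapp pre-upgrade check manually on an individual RHEL 8 system instead of through an Insights task. A minimal sketch follows; the package name and report path reflect a typical RHEL 8 setup and may vary with your RHEL 8 minor version:
# On the RHEL 8 system to be assessed
dnf install leapp-upgrade
leapp preupgrade
# Review the findings, including any upgrade inhibitors
less /var/log/leapp/leapp-report.txt
When the system is connected to Insights, results from a manually generated Leapp report are also visible in Insights, as described earlier in this chapter.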
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks/pre-upgrade-analysis-task_overview-tasks
Configure data sources
Configure data sources Red Hat build of Quarkus 3.8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/configure_data_sources/index
Chapter 4. Managing applications that show in the dashboard
Chapter 4. Managing applications that show in the dashboard 4.1. Adding an application to the dashboard If you have installed an application in your OpenShift cluster, you can add a tile for that application to the OpenShift AI dashboard (the Applications Enabled page) to make it accessible for OpenShift AI users. Prerequisites You have cluster administrator privileges for your OpenShift cluster. The dashboard configuration enablement option is set to true (the default). Note that a cluster administrator can disable this ability as described in Preventing users from adding applications to the dashboard . Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Home API Explorer . On the API Explorer page, search for the OdhApplication kind. Click the OdhApplication kind to open the resource details page. On the OdhApplication details page, select the redhat-ods-applications project from the Project list. Click the Instances tab. Click Create OdhApplication . On the Create OdhApplication page, copy the following code and paste it into the YAML editor. apiVersion: dashboard.opendatahub.io/v1 kind: OdhApplication metadata: name: examplename namespace: redhat-ods-applications labels: app: odh-dashboard app.kubernetes.io/part-of: odh-dashboard spec: enable: validationConfigMap: examplename-enable img: >- <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="path data" fill="#ee0000"/> </svg> getStartedLink: 'https://example.org/docs/quickstart.html' route: exampleroutename routeNamespace: examplenamespace displayName: Example Name kfdefApplications: [] support: third party support csvName: '' provider: example docsLink: 'https://example.org/docs/index.html' quickStart: '' getStartedMarkDown: >- # Example Enter text for the information panel. description: >- Enter summary text for the tile. category: Self-managed | Partner managed | {org-name} managed Modify the parameters in the code for your application. Tip To see example YAML files, click Home API Explorer , select OdhApplication , click the Instances tab, select an instance, and then click the YAML tab. Click Create . The application details page appears. Log in to OpenShift AI. In the left menu, click Applications Explore . Locate the new tile for your application and click it. In the information pane for the application, click Enable . Verification In the left menu of the OpenShift AI dashboard, click Applications Enabled and verify that your application is available. 4.2. Preventing users from adding applications to the dashboard By default, OpenShift AI administrators can add applications to the OpenShift AI dashboard Application Enabled page. As a cluster administrator, you can disable the ability for OpenShift AI administrators to add applications to the dashboard. Note: The Jupyter tile is enabled by default. To disable it, see Hiding the default Jupyter application . Prerequisite You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. Open the dashboard configuration file: In the Administrator perspective, click Home API Explorer . In the search bar, enter OdhDashboardConfig to filter by kind. Click the OdhDashboardConfig custom resource (CR) to open the resource details page. Select the redhat-ods-applications project from the Project list. Click the Instances tab. Click the odh-dashboard-config instance to open the details page. 
Click the YAML tab. In the spec:dashboardConfig section, set the value of enablement to false to disable the ability for dashboard users to add applications to the dashboard. Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster. Verification Open the OpenShift AI dashboard Application Enabled page. 4.3. Disabling applications connected to OpenShift AI You can disable applications and components so that they do not appear on the OpenShift AI dashboard when you no longer want to use them, for example, when data scientists no longer use an application or when the application license expires. Disabling unused applications allows your data scientists to manually remove these application tiles from their OpenShift AI dashboard so that they can focus on the applications that they are most likely to use. See Removing disabled applications from the dashboard for more information about manually removing application tiles. Important Do not follow this procedure when disabling Red Hat OpenShift API Management. You can only uninstall Red Hat OpenShift API Management from OpenShift Cluster Manager. Prerequisites You have logged in to the OpenShift web console. You are part of the cluster-admins or dedicated-admins user group in your OpenShift cluster. The dedicated-admins user group applies only to OpenShift Dedicated. You have installed or configured the service on your OpenShift cluster. The application or component that you want to disable is enabled and appears on the Enabled page. Procedure In the OpenShift web console, switch to the Administrator perspective. Switch to the redhat-ods-applications project. Click Operators Installed Operators . Click on the Operator that you want to uninstall. You can enter a keyword into the Filter by name field to help you find the Operator faster. Delete any Operator resources or instances by using the tabs in the Operator interface. During installation, some Operators require the administrator to create resources or start process instances using tabs in the Operator interface. These must be deleted before the Operator can uninstall correctly. On the Operator Details page, click the Actions drop-down menu and select Uninstall Operator . An Uninstall Operator? dialog box is displayed. Select Uninstall to uninstall the Operator, Operator deployments, and pods. After this is complete, the Operator stops running and no longer receives updates. Important Removing an Operator does not remove any custom resource definitions or managed resources for the Operator. Custom resource definitions and managed resources still exist and must be cleaned up manually. Any applications deployed by your Operator and any configured off-cluster resources continue to run and must be cleaned up manually. Verification The Operator is uninstalled from its target clusters. The Operator no longer appears on the Installed Operators page. The disabled application is no longer available for your data scientists to use, and is marked as Disabled on the Enabled page of the OpenShift AI dashboard. This action may take a few minutes to occur following the removal of the Operator. 4.4. Showing or hiding information about enabled applications If you have installed another application in your OpenShift cluster, you can add a tile for that application to the OpenShift AI dashboard (the Applications Enabled page) to make it accessible for OpenShift AI users. Prerequisites You have cluster administrator privileges for your OpenShift cluster. 
Procedure Log in to the OpenShift console as a cluster administrator. In the Administrator perspective, click Home API Explorer . On the API Explorer page, search for the OdhApplication kind. Click the OdhApplication kind to open the resource details page. On the OdhApplication details page, select the redhat-ods-applications project from the Project list. Click the Instances tab. Click Create OdhApplication . On the Create OdhApplication page, copy the following code and paste it into the YAML editor. apiVersion: dashboard.opendatahub.io/v1 kind: OdhApplication metadata: name: examplename namespace: redhat-ods-applications labels: app: odh-dashboard app.kubernetes.io/part-of: odh-dashboard spec: enable: validationConfigMap: examplename-enable img: >- <svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="path data" fill="#ee0000"/> </svg> getStartedLink: 'https://example.org/docs/quickstart.html' route: exampleroutename routeNamespace: examplenamespace displayName: Example Name kfdefApplications: [] support: third party support csvName: '' provider: example docsLink: 'https://example.org/docs/index.html' quickStart: '' getStartedMarkDown: >- # Example Enter text for the information panel. description: >- Enter summary text for the tile. category: Self-managed | Partner managed | Red Hat managed Modify the parameters in the code for your application. Tip To see example YAML files, click Home API Explorer , select OdhApplication , click the Instances tab, select an instance, and then click the YAML tab. Click Create . The application details page appears. Log in to OpenShift AI. In the left menu, click Applications Explore . Locate the new tile for your application and click it. In the information pane for the application, click Enable . Verification In the left menu of the OpenShift AI dashboard, click Applications Enabled and verify that your application is available. 4.5. Hiding the default Jupyter application The OpenShift AI dashboard includes Jupyter as an enabled application by default. To hide the Jupyter tile from the list of Enabled applications, edit the dashboard configuration file. Prerequisite You have cluster administrator privileges for your OpenShift cluster. Procedure Log in to the OpenShift console as a cluster administrator. Open the dashboard configuration file: In the Administrator perspective, click Home API Explorer . In the search bar, enter OdhDashboardConfig to filter by kind. Click the OdhDashboardConfig custom resource (CR) to open the resource details page. Select the redhat-ods-applications project from the Project list. Click the Instances tab. Click the odh-dashboard-config instance to open the details page. Click the YAML tab. In the spec:notebookController section, set the value of enabled to false to hide the Jupyter tile from the list of Enabled applications. Click Save to apply your changes and then click Reload to make sure that your changes are synced to the cluster. Verification In the OpenShift AI dashboard, select Applications> Enabled . You should not see the Jupyter tile.
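For reference, a hedged sketch of the relevant portion of the odh-dashboard-config YAML after applying both of the changes described above is shown below. The apiVersion value is an assumption for illustration; keep whatever your existing instance declares:
apiVersion: opendatahub.io/v1alpha
kind: OdhDashboardConfig
metadata:
  name: odh-dashboard-config
  namespace: redhat-ods-applications
spec:
  dashboardConfig:
    enablement: false    # prevents OpenShift AI administrators from adding applications to the dashboard
  notebookController:
    enabled: false       # hides the default Jupyter tile from the Enabled page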
[ "apiVersion: dashboard.opendatahub.io/v1 kind: OdhApplication metadata: name: examplename namespace: redhat-ods-applications labels: app: odh-dashboard app.kubernetes.io/part-of: odh-dashboard spec: enable: validationConfigMap: examplename-enable img: >- <svg width=\"24\" height=\"25\" viewBox=\"0 0 24 25\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\"> <path d=\"path data\" fill=\"#ee0000\"/> </svg> getStartedLink: 'https://example.org/docs/quickstart.html' route: exampleroutename routeNamespace: examplenamespace displayName: Example Name kfdefApplications: [] support: third party support csvName: '' provider: example docsLink: 'https://example.org/docs/index.html' quickStart: '' getStartedMarkDown: >- # Example Enter text for the information panel. description: >- Enter summary text for the tile. category: Self-managed | Partner managed | {org-name} managed", "apiVersion: dashboard.opendatahub.io/v1 kind: OdhApplication metadata: name: examplename namespace: redhat-ods-applications labels: app: odh-dashboard app.kubernetes.io/part-of: odh-dashboard spec: enable: validationConfigMap: examplename-enable img: >- <svg width=\"24\" height=\"25\" viewBox=\"0 0 24 25\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\"> <path d=\"path data\" fill=\"#ee0000\"/> </svg> getStartedLink: 'https://example.org/docs/quickstart.html' route: exampleroutename routeNamespace: examplenamespace displayName: Example Name kfdefApplications: [] support: third party support csvName: '' provider: example docsLink: 'https://example.org/docs/index.html' quickStart: '' getStartedMarkDown: >- # Example Enter text for the information panel. description: >- Enter summary text for the tile. category: Self-managed | Partner managed | Red Hat managed" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_openshift_ai/managing-applications-that-show-in-the-dashboard
3. We Need Feedback!
3. We Need Feedback! If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6 and the component doc-Global_File_System_2 . When submitting a bug report, be sure to mention the manual's identifier: If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, include the section number and some of the surrounding text so we can find it easily.
[ "rh-gfs2(EN)-6 (2017-3-8T15:15)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/sect-RedHat-We_Need_Feedback
12.4. Memcached Interface Security
12.4. Memcached Interface Security 12.4.1. Publish Memcached Endpoints as a Public Interface Red Hat JBoss Data Grid's memcached server operates as a management interface by default. To extend its operations to a public interface, alter the value of the interface parameter in the socket-binding element from management to public as follows:
[ "<socket-binding name=\"memcached\" interface=\"public\" port=\"11211\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-memcached_interface_security
5.4. Booting into Emergency Mode
5.4. Booting into Emergency Mode In emergency mode, you are booted into the most minimal environment possible. The root file system is mounted read-only and almost nothing is set up. The main advantage of emergency mode over single-user mode is that the init files are not loaded. If init is corrupted or not working, you can still mount file systems to recover data that could be lost during a re-installation. To boot into emergency mode, use the same method as described for single-user mode in Section 5.3, "Booting into Single-User Mode" with one exception: replace the keyword single with the keyword emergency .
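For example, at the GRUB boot screen you would edit the kernel line and append the keyword, ending up with a line similar to the following. The kernel version and root device shown here are assumptions; your system's values will differ:
kernel /vmlinuz-2.6.9-5.EL ro root=/dev/VolGroup00/LogVol00 emergency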
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-rescuemode-booting-emergency
16.3. USB Devices
16.3. USB Devices This section gives the commands required for handling USB devices. 16.3.1. Assigning USB Devices to Guest Virtual Machines Most devices, such as web cameras, card readers, disk drives, keyboards, and mice, are connected to a computer using a USB port and cable. There are two ways to pass such devices to a guest virtual machine: Using USB passthrough - this requires the device to be physically connected to the host physical machine that is hosting the guest virtual machine. SPICE is not needed in this case. USB devices on the host can be passed to the guest in the command line or virt-manager . See Section 19.3.2, "Attaching USB Devices to a Guest Virtual Machine" for virt-manager directions. Note that the virt-manager directions are not suitable for hot plugging or hot unplugging devices. If you want to hot plug or hot unplug a USB device, see Procedure 20.4, "Hot plugging USB devices for use by the guest virtual machine" . Using USB re-direction - USB re-direction is best used in cases where there is a host physical machine that is running in a data center. The user connects to their guest virtual machine from a local machine or thin client. On this local machine there is a SPICE client. The user can attach any USB device to the thin client and the SPICE client will redirect the device to the host physical machine in the data center so that it can be used by the guest virtual machine that is running there. For instructions via the virt-manager see Section 19.3.3, "USB Redirection" . 16.3.2. Setting a Limit on USB Device Redirection To filter out certain devices from redirection, pass the filter property to -device usb-redir . The filter property takes a string consisting of filter rules. The format for a rule is: Use the value -1 to accept any value for a particular field. You may use multiple rules on the same command line using | as a separator. Note that if a device matches none of the passed-in rules, redirecting it will not be allowed! Example 16.1. An example of limiting redirection with a guest virtual machine Prepare a guest virtual machine. Add the following code excerpt to the guest virtual machine's domain XML file: Start the guest virtual machine and confirm the setting changes by running the following: Plug a USB device into a host physical machine, and use virt-manager to connect to the guest virtual machine. Click USB device selection in the menu, which will produce the following message: "Some USB devices are blocked by host policy". Click OK to confirm and continue. The filter takes effect. To make sure that the filter captures properly, check the USB device vendor and product, then make the following changes in the guest virtual machine's domain XML to allow for USB redirection. Restart the guest virtual machine, then use virt-viewer to connect to the guest virtual machine. The USB device will now redirect traffic to the guest virtual machine.
[ "<class>:<vendor>:<product>:<version>:<allow>", "<redirdev bus='usb' type='spicevmc'> <alias name='redir0'/> <address type='usb' bus='0' port='3'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xBEEF' version='2.0' allow='yes'/> <usbdev class='-1' vendor='-1' product='-1' version='-1' allow='no'/> </redirfilter>", "ps -ef | grep USDguest_name", "-device usb-redir,chardev=charredir0,id=redir0, / filter=0x08:0x1234:0xBEEF:0x0200:1|-1:-1:-1:-1:0,bus=usb.0,port=3", "<redirfilter> <usbdev class='0x08' vendor='0x0951' product='0x1625' version='2.0' allow='yes'/> <usbdev allow='no'/> </redirfilter>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_device_configuration-usb_devices
Chapter 6. Configure storage for OpenShift Container Platform services
Chapter 6. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and Modifying retention time for Prometheus metrics data in the Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 6.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. A command-line sketch of the same configuration appears at the end of this chapter. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to an OpenShift Data Foundation Persistent Volume on vSphere and bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions .
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 6.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as backend storage for the OCP image registry, follow the steps in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create an ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web console. Click ObjectBucketClaim --> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for the MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the managementState of the Image Registry Operator to Managed . Edit the spec.storage section of the Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console, or get the same information from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storage class is the ceph-rgw storage class and the endpoint points to the internal SVC in the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 6.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See Modifying retention time for Prometheus metrics data in the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace.
In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. A command-line spot check is also sketched after the command listing at the end of this chapter. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 6.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type of ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type of ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod 6.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . For more information, see OpenShift ClusterResourceQuota . With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy the storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd .
<desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the previous step, for example, quota1 . 6.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 6.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. With the single redundancy policy, a copy of each shard is replicated across the nodes so that it is always available and can be recovered as long as at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 6.5.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 6.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid a PV full scenario on the PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the index data retention to the default of 5 days, as shown in the following example. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by a Persistent Volume Claim, use the procedure for removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
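The following is a command-line sketch of the image registry configuration described in Section 6.1; it assumes the example claim name ocs4registry and the CephFS storage class name ocs-storagecluster-cephfs , both of which may differ in your deployment.

# Create an RWX Persistent Volume Claim for the registry in the openshift-image-registry project
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ocs4registry
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: ocs-storagecluster-cephfs
EOF
# Wait until the claim is listed as Bound
oc get pvc ocs4registry -n openshift-image-registry
# Point the Image Registry Operator at the new claim
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge \
  -p '{"spec":{"storage":{"pvc":{"claim":"ocs4registry"}}}}'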
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'", "oc describe noobaa", "oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d", "oc describe pod <image-registry-name>", "oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] 
overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]", "oc get clusterresourcequota -A oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
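As a quick spot check of the monitoring, logging, and overprovision control configuration described in this chapter, the following sketch lists the relevant Persistent Volume Claims and labels an application namespace; the namespace name quota-rbd and the label storagequota1 are the example values used in the procedure.

# Confirm that the alertmanager and prometheus claims are Bound
oc get pvc -n openshift-monitoring
# Confirm that the Elasticsearch claims for cluster logging are Bound
oc get pvc -n openshift-logging
# Label an application namespace so that the overprovision control quota applies to it
oc label namespace quota-rbd storagequota=storagequota1
# Inspect the resulting cluster resource quota
oc describe clusterresourcequota -A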
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/configure-storage-for-openshift-container-platform-services_rhodf
Chapter 5. Using your own certificate authority bundle
Chapter 5. Using your own certificate authority bundle You can bring your organization's certificate authority (CA) bundle for signing and verifying your build artifacts with Red Hat's Trusted Artifact Signer (RHTAS) service. Prerequisites Installation of the RHTAS operator running on Red Hat OpenShift Container Platform. A running Securesign instance. Your CA root certificate. A workstation with the oc binary installed. Procedure Log in to OpenShift from the command line: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example USD oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443 Note You can find your login token and URL for use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Provide your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Example USD oc project trusted-artifact-signer Create a new ConfigMap by using your organization's CA root certificate bundle: Example USD oc create configmap custom-ca-bundle --from-file=ca-bundle.crt Important The certificate filename must be ca-bundle.crt . Open the Securesign resource for editing: Example USD oc edit Securesign securesign-sample Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section: Example apiVersion: rhtas.redhat.com/v1alpha1 kind: Securesign metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec: ... Save, and quit the editor. Open the Fulcio resource for editing: Example USD oc edit Fulcio securesign-sample Add the rhtas.redhat.com/trusted-ca annotation under the metadata.annotations section: Example apiVersion: rhtas.redhat.com/v1alpha1 kind: Fulcio metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec: ... Save, and quit the editor. Wait for the RHTAS operator to reconfigure before signing and verifying artifacts.
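The following is a small verification sketch using the same example names as the procedure above; the project, ConfigMap, and instance names may differ in your environment.

# Confirm that the CA bundle ConfigMap exists in the RHTAS project
oc get configmap custom-ca-bundle -n trusted-artifact-signer
# Confirm that the annotation was applied to the Securesign and Fulcio resources
oc get securesign securesign-sample -n trusted-artifact-signer -o yaml | grep trusted-ca
oc get fulcio securesign-sample -n trusted-artifact-signer -o yaml | grep trusted-ca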
[ "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "oc project trusted-artifact-signer", "oc create configmap custom-ca-bundle --from-file=ca-bundle.crt", "oc edit Securesign securesign-sample", "apiVersion: rhtas.redhat.com/v1alpha1 kind: Securesign metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec:", "oc edit Fulcio securesign-sample", "apiVersion: rhtas.redhat.com/v1alpha1 kind: Fulcio metadata: name: example-instance annotations: rhtas.redhat.com/trusted-ca: custom-ca-bundle spec:" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/administration_guide/using-your-own-ca-bundle_admin
Part II. Migrating to IdM from external sources
Part II. Migrating to IdM from external sources
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/migrating_to_identity_management_on_rhel_8/migrating-to-idm-from-external-sources
4.2. Starting luci
4.2. Starting luci Note Using luci to configure a cluster requires that ricci be installed and running on the cluster nodes, as described in Section 3.13, "Considerations for ricci " . As noted in that section, using ricci requires a password, which luci requires you to enter for each cluster node when you create a cluster, as described in Section 4.4, "Creating a Cluster" . Before starting luci , ensure that the IP ports on your cluster nodes allow connections to port 11111 from the luci server on any nodes that luci will be communicating with. For information on enabling IP ports on cluster nodes, see Section 3.3.1, "Enabling IP Ports on Cluster Nodes" . To administer Red Hat High Availability Add-On with Conga , install and run luci as follows: Select a computer to host luci and install the luci software on that computer. For example: Note Typically, a computer in a server cage or a data center hosts luci ; however, a cluster computer can host luci . Start luci using service luci start . For example: Note As of Red Hat Enterprise Linux release 6.1, you can configure some aspects of luci 's behavior by means of the /etc/sysconfig/luci file, including the port and host parameters, as described in Section 3.4, "Configuring luci with /etc/sysconfig/luci " . Modified port and host parameters will automatically be reflected in the URL displayed when the luci service starts. At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent). The URL syntax for the luci server is https:// luci_server_hostname : luci_server_port . The default value of luci_server_port is 8084 . The first time you access luci , a browser-specific prompt regarding the self-signed SSL certificate (of the luci server) is displayed. Upon acknowledging the dialog box or boxes, your Web browser displays the luci login page. Any user able to authenticate on the system that is hosting luci can log in to luci . As of Red Hat Enterprise Linux 6.2, only the root user on the system that is running luci can access any of the luci components until an administrator (the root user or a user with administrator permission) sets permissions for that user. For information on setting luci permissions for users, see Section 4.3, "Controlling Access to luci" . Logging in to luci displays the luci Homebase page, as shown in Figure 4.1, "luci Homebase page" . Figure 4.1. luci Homebase page Note There is an idle timeout for luci that logs you out after 15 minutes of inactivity.
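A minimal sketch of preparing the luci host from the command line follows; the iptables rule assumes the default luci port of 8084 and a default firewall configuration, so adjust it to your environment before relying on it.

# Install luci, start it, and make sure it starts at boot
yum install luci
service luci start
chkconfig luci on
# Allow access to the default luci port (8084) through the host firewall
iptables -I INPUT -p tcp --dport 8084 -j ACCEPT
service iptables save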
[ "yum install luci", "service luci start Starting luci: generating https SSL certificates... done [ OK ] Please, point your web browser to https://nano-01:8084 to access luci" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-start-luci-ricci-conga-CA
Chapter 8. Multicloud Object Gateway bucket replication
Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway. See Accessing the Multicloud Object Gateway with your applications. Download the Multicloud Object Gateway (MCG) command-line interface: Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in the case of IBM Power, use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found at https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC). You must define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the object bucket claim (OBC) or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace. <desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . A quick command-line check of the applied policy is sketched after the command listing at the end of this chapter. 8.2.
Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes. Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. You can pass many backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of the bucket class or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each Object Bucket Claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. You can pass many backingstores. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated. You can leave it empty, for example, {"prefix": ""} . 8.3. Enabling log based bucket replication When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data. Important This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket. Note This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication. 8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Services environment You can optimize replication by using the event logs of the Amazon Web Services (AWS) cloud environment.
You can enable log based bucket replication for new namespace buckets in the web console during namespace bucket creation. Prerequisites Ensure that object logging is enabled in AWS. For more information, see the "Using the S3 console" section in Enabling Amazon S3 server access logging . Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Object Bucket Claims . Click Create ObjectBucketClaim . Enter the name of ObjectBucketName and select StorageClass and BucketClass. Select the Enable replication check box to enable replication. In the Replication policy section, select the Optimize replication using event logs checkbox. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML You can enable log based bucket replication for existing buckets that are created using the command line interface or by applying a YAML, but not for buckets that are created using AWS S3 commands. Procedure Edit the YAML of the bucket's OBC to enable log based bucket replication. Add the following under spec : Note It is also possible to add this to the YAML of an OBC before it is created. rule_id Specify an ID of your choice for identifying the rule destination_bucket Specify the name of the target MCG bucket that the objects are copied to (optional) {"filter": {"prefix": <>}} Specify a prefix string that you can set to filter the objects that are replicated log_replication_info Specify an object that contains data related to log-based replication optimization. {"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs. 8.3.3. Enabling log based bucket replication in Microsoft Azure Prerequisites Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal: Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID. For information, see Register an application . Ensure that a new client secret is created and the application secret is noted down. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down. For information, see Create a Log Analytics workspace . Ensure that the Reader role is assigned under Access control and members are selected and the name of the application that you registered in the previous step is provided. For more information, see Assign Azure roles using the Azure portal . Ensure that a new storage account is created and the Access keys are noted down. In the Monitoring section of the storage account that you created, select a blob, and in the Diagnostic settings screen select only StorageWrite and StorageDelete , and in the destination details add the Log Analytics workspace that you created earlier. For more information, see Diagnostic settings in Azure Monitor . Ensure that two new containers for object source and object destination are created.
Administrator access to OpenShift Web Console. Procedure Create a secret with credentials to be used by the namespacestores . Create a NamespaceStore backed by a container created in Azure. For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface . Create a new Namespace-Bucketclass and OBC that utilizes it. Check the object bucket name by looking in the YAML of the target OBC, or by listing all S3 buckets, for example, with s3 ls . Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec : sync_deletion Specify a boolean value, true or false . destination_bucket Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC's YAML. Verification steps Write objects to the source bucket. Wait until MCG replicates them. Delete the objects from the source bucket. Verify that the objects were removed from the target bucket. A sketch of these checks with the AWS CLI follows this section. 8.3.4. Enabling log-based bucket replication deletion Prerequisites Administrator access to OpenShift Web Console. AWS Server Access Logging configured for the desired bucket. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Object Bucket Claims . Click Create new Object bucket claim . (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix.
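As a sketch of the verification steps for sync deletion, the following uses the AWS CLI against the MCG S3 endpoint; the endpoint URL and the source and target bucket names are placeholders that depend on your deployment, and the access credentials would normally come from the OBC secret.

# Write a test object to the source bucket (endpoint and bucket names are placeholders)
aws s3 --endpoint-url https://<mcg-s3-endpoint> cp ./testfile s3://<source-bucket>/testfile
# After replication has run, confirm that the object reached the target bucket
aws s3 --endpoint-url https://<mcg-s3-endpoint> ls s3://<target-bucket>/
# Delete the object from the source bucket
aws s3 --endpoint-url https://<mcg-s3-endpoint> rm s3://<source-bucket>/testfile
# With sync deletion enabled, the object should eventually disappear from the target bucket
aws s3 --endpoint-url https://<mcg-s3-endpoint> ls s3://<target-bucket>/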
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: {\"rules\": [{ \"rule_id\": \"\", \"destination_bucket\": \"\", \"filter\": {\"prefix\": \"\"}}]}", "noobaa -n openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "replicationPolicy: '{\"rules\":[{\"rule_id\":\"<RULE ID>\", \"destination_bucket\":\"<DEST>\", \"filter\": {\"prefix\": \"<PREFIX>\"}}], \"log_replication_info\": {\"logs_location\": {\"logs_bucket\": \"<LOGS_BUCKET>\"}}}'", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: TenantID: <AZURE TENANT ID ENCODED IN BASE64> ApplicationID: <AZURE APPLICATIOM ID ENCODED IN BASE64> ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64> LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64> AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "replicationPolicy:'{\"rules\":[ {\"rule_id\":\"ID goes here\", \"sync_deletions\": \"<true or false>\"\", \"destination_bucket\":object bucket name\"} ], \"log_replication_info\":{\"endpoint_type\":\"AZURE\"}}'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_hybrid_and_multicloud_resources/multicloud_object_gateway_bucket_replication
Installing an on-premise cluster with the Agent-based Installer
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.14 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/index
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named exampleQueue . For more information, see Creating a queue . 3.2. Running your first example The example creates a consumer and producer for a queue named exampleQueue . It sends a text message and then receives it back, printing the received message to the console. Procedure Use Maven to build the examples by running the following command in the <install-dir> /examples/features/standard/queue directory. USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample On Windows: > java -cp "target\classes;target\dependency\*" org.apache.activemq.artemis.jms.example.QueueExample For example, running it on Linux results in the following output: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message The source code for the example is in the <install-dir> /examples/features/standard/queue/src directory. Additional examples are available in the <install-dir> /examples/features/standard directory.
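A sketch of satisfying the broker prerequisites from the command line is shown below; it assumes a broker instance has already been created under <broker-instance-dir> , and the artemis queue create options shown are typical but may vary between AMQ Broker versions.

# Start the broker instance in the foreground
<broker-instance-dir>/bin/artemis run
# In another terminal, create the queue that the example expects
<broker-instance-dir>/bin/artemis queue create --name exampleQueue \
  --address exampleQueue --anycast --durable --auto-create-address --silent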
[ "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample", "> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_core_protocol_jms_client/getting_started
Chapter 109. KafkaTopicSpec schema reference
Chapter 109. KafkaTopicSpec schema reference Used in: KafkaTopic The KafkaTopicSpec schema has the following properties (property, type, description):
topicName (string) - The name of the topic. When absent, this defaults to the metadata.name of the topic. It is recommended not to set this unless the topic name is not a valid OpenShift resource name.
partitions (integer) - The number of partitions the topic should have. This cannot be decreased after topic creation. It can be increased after topic creation, but it is important to understand the consequences that has, especially for topics with semantic partitioning. When absent, this defaults to the broker configuration for num.partitions .
replicas (integer) - The number of replicas the topic should have. When absent, this defaults to the broker configuration for default.replication.factor .
config (map) - The topic configuration.
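For context, a KafkaTopic custom resource that uses these properties might look like the following sketch; the topic and cluster names are placeholders, the strimzi.io/cluster label must match the name of your Kafka cluster, and the apiVersion shown is the one used by recent Streams for Apache Kafka releases.

# Create a topic with explicit partitions, replicas, and topic configuration
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 604800000
    segment.bytes: 1073741824
EOF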
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaTopicSpec-reference
20.21. Creating a Guest Virtual Machine from a Configuration File
20.21. Creating a Guest Virtual Machine from a Configuration File Guest virtual machines can be created from XML configuration files. You can copy existing XML from previously created guest virtual machines or use the virsh dumpxml command. Example 20.49. How to create a guest virtual machine from an XML file The following example creates a new virtual machine from the existing guest1.xml configuration file. You need to have this file before beginning. You can retrieve the file using the virsh dumpxml command. See Example 20.48, "How to retrieve the XML file for a guest virtual machine" for instructions. # virsh create guest1.xml
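Putting the two commands together, the following sketch captures the XML of an existing guest and then creates a new transient guest from an edited copy; guest1 is the example name used above, and you would normally change at least the name and UUID in the copied file before creating the new machine.

# Save the configuration of an existing guest to a file
virsh dumpxml guest1 > guest1.xml
# Edit the file (for example, change the name and remove or change the UUID), then
# create and start a transient guest virtual machine from it
virsh create guest1.xml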
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-creating_a_guest_virtual_machine_from_a_configuration_file
Preface
Preface Red Hat Quay container registry platform provides secure storage, distribution, and governance of containers and cloud-native artifacts on any infrastructure. It is available as a standalone component or as an Operator on OpenShift Container Platform. Red Hat Quay includes the following features and benefits: Granular security management Fast and robust at any scale High velocity CI/CD Automated installation and updates Enterprise authentication and team-based access control OpenShift Container Platform integration Red Hat Quay is regularly released, containing new features, bug fixes, and software updates. To upgrade Red Hat Quay for both standalone and OpenShift Container Platform deployments, see Upgrade Red Hat Quay . Important Red Hat Quay only supports rolling back, or downgrading, to z-stream versions, for example, from 3.7.2 to 3.7.1. Rolling back to y-stream versions (for example, from 3.7.0 to 3.6.0) is not supported. This is because Red Hat Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Red Hat Quay. Database schema upgrades are not considered backwards compatible. Downgrading to z-streams is neither recommended nor supported by either Operator based deployments or virtual machine based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Red Hat Quay deployment must be made in conjunction with the Red Hat Quay support and development teams. For more information, contact Red Hat Quay support. Documentation for Red Hat Quay is versioned with each release. The latest Red Hat Quay documentation is available from the Red Hat Quay Documentation page. Currently, version 3 is the latest major version. Note Prior to version 2.9.2, Red Hat Quay was called Quay Enterprise. Documentation for 2.9.2 and prior versions are archived on the Product Documentation for Red Hat Quay 2.9 page.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_release_notes/pr01
Chapter 4. Technology Preview
Chapter 4. Technology Preview This section lists Technology Preview features in Red Hat Developer Hub 1.4. Important Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. However, these features are not fully supported under Red Hat Subscription Level Agreements, may not be functionally complete, and are not intended for production use. As Red Hat considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features. See: Technology Preview support scope . 4.1. Added notification backend plugins With this update, Developer Hub includes the following dynamic plugins to manage and streamline notification delivery: @backstage/plugin-signals @backstage/plugin-notifications-backend @backstage/plugin-notifications @backstage/plugin-notifications-backend-module-email These plugins are disabled by default. Additional resources RHIDP-5545
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/release_notes/technology-preview
Chapter 1. Red Hat OpenShift support for Windows Containers overview
Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the release notes . You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Windows instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/windows-container-overview
Chapter 5. Installing a cluster on GCP with network customizations
Chapter 5. Installing a cluster on GCP with network customizations In OpenShift Container Platform version 4.15, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
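The pull secret is a small JSON document. The following is a redacted sketch of its general shape; treat it only as an illustration, because the exact set of registry entries and the credential values depend on your account:
{"auths": {"cloud.openshift.com": {"auth": "<base64_token>", "email": "<your_email>"}, "quay.io": {"auth": "<base64_token>", "email": "<your_email>"}}}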
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 5.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 5.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 5.1. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 5.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 5.2. Machine series for 64-bit ARM machines Tau T2A 5.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 5.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection.
For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 5.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 5.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 24 1 15 18 19 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 16 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 20 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 21 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 17 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 23 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 24 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 5.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 
2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. 
Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 5.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
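Optionally, you can confirm that the installation program generated the manifest files by listing the manifests directory. This is an informal sanity check rather than a documented step, and the exact file names vary by release and platform:
USD ls <installation_directory>/manifests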
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 5.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 5.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 5.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 5.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 5.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 5.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 5.8. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 5.9. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
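Optionally, before you add your own manifest, you can list the network-related manifests that the installation program already generated. This is an informal check, not a documented step, and the generated file names can vary by release:
USD ls <installation_directory>/manifests/ | grep cluster-network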
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 5.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.2. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.3. defaultNetwork object Field Type Description type string OVNKubernetes . 
The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.6. 
ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. 
The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 5.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 5.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 5.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. 
If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 
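If you want to confirm that the Telemetry client is running in a connected cluster, one informal check, which is not part of the documented procedure, is to look for the telemeter-client pod in the monitoring namespace:
USD oc get pods -n openshift-monitoring | grep telemeter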
Additional resources See About remote health monitoring for more information about the Telemetry service 5.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
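As an informal post-installation check, which is not part of the documented procedure, you can verify that all nodes report a Ready status and that the cluster Operators have finished rolling out:
USD oc get nodes
USD oc get clusteroperators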
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: 16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 17 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 18 region: us-central1 19 defaultMachinePlatform: tags: 20 - global-tag1 - global-tag2 osImage: 21 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 22 fips: false 23 sshKey: ssh-ed25519 AAAA... 
24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml 
openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_gcp/installing-gcp-network-customizations
Chapter 2. Red Hat 3scale API Management 2.15.2 - Patch release
Chapter 2. Red Hat 3scale API Management 2.15.2 - Patch release 2.1. New features Red Hat 3scale API Management 2.15.2 introduces the following enhancements: Added compatibility with Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP). Added compatibility with Red Hat build of Keycloak version 26. 2.2. Resolved issues Red Hat 3scale API Management 2.15.2 addresses the following issues: Fixed an issue preventing reconnection to Redis after a dropped connection ( THREESCALE-11528 ). Applied security and stability improvements.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/red_hat_3scale_api_management_2_15_2_patch_release
2.42. RHEA-2011:0576 - new package: spice-vdagent
2.42. RHEA-2011:0576 - new package: spice-vdagent A new spice-vdagent package is now available for Red Hat Enterprise Linux 6. The new spice-vdagent package provides a SPICE agent for Linux guests. Bug Fixes BZ# 658464 The new spice-vdagent package allows for client window mode, automatic X session resolution adjustment to the client resolution, and copy and paste support for text and images between a guest's active X session and the SPICE client operating system. BZ# 680227 Guest resolutions were not automatically aligned when multiple monitors were used. This update corrects the monitor settings. Now, the vdagent automatically aligns the guest resolution for one or more monitors. All SPICE virtual machine users should install this new package, which adds these features. Note: Guests must be rebooted after installing this package for the changes to take effect.
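As a minimal sketch of applying this on a Red Hat Enterprise Linux 6 guest, assuming the guest is subscribed to a channel that provides the package, you might install it and then reboot: USD yum install spice-vdagent followed by USD reboot . The exact channel and update workflow depend on your environment.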
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/spice-vdagent_new
Chapter 5. Managing usability analytics and data collection from automation controller
Chapter 5. Managing usability analytics and data collection from automation controller You can change how you participate in usability analytics and data collection from automation controller by opting out or changing your settings in the automation controller user interface. 5.1. Usability analytics and data collection Usability data collection is included with automation controller to better understand how users interact with automation controller, to help enhance future releases, and to continue streamlining your user experience. Only users installing a trial of automation controller or a fresh installation of automation controller are opted in for this data collection. Additional resources For more information, see the Red Hat privacy policy . 5.1.1. Controlling data collection from automation controller You can control how automation controller collects data by setting your participation level in the User Interface tab in the Settings menu. Procedure Log in to your automation controller. Navigate to Settings User Interface . Select the desired level of data collection from the User Analytics Tracking State drop-down list: Off : Prevents any data collection. Anonymous : Enables data collection without your specific user data. Detailed : Enables data collection including your specific user data. Click Save to apply the settings or Cancel to discard the changes.
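If you prefer to script this change rather than use the user interface, the same participation level can in principle be set through the controller REST API; the endpoint and the PENDO_TRACKING_STATE setting name below are assumptions based on the controller API and should be verified against your controller version: USD curl -k -u admin:<password> -X PATCH -H 'Content-Type: application/json' -d '{"PENDO_TRACKING_STATE": "off"}' https://<controller_host>/api/v2/settings/ui/ where the value can be off , anonymous , or detailed , matching the drop-down options described above.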
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/assembly-controlling-data-collection
24.4.4. Using the df Command
24.4.4. Using the df Command The df command allows you to display a detailed report on the system's disk space usage. To do so, type the following at a shell prompt: df For each listed file system, the df command displays its name ( Filesystem ), size ( 1K-blocks or Size ), how much space is used ( Used ), how much space is still available ( Available ), the percentage of space usage ( Use% ), and where the file system is mounted ( Mounted on ). For example: By default, the df command shows the partition size in 1-kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, supply the -h command-line option, which causes df to display the values in a human-readable format: df -h For instance: For a complete list of available command-line options, see the df (1) manual page.
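As a further illustration of the same command, you can pass a directory or mount point to df to report only the file system that contains it, for example: USD df -h /boot This prints a single entry, in the human-readable format described above, for the file system mounted on /boot .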
[ "~]USD df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_kvm-lv_root 18618236 4357360 13315112 25% / tmpfs 380376 288 380088 1% /dev/shm /dev/vda1 495844 77029 393215 17% /boot", "~]USD df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_kvm-lv_root 18G 4.2G 13G 25% / tmpfs 372M 288K 372M 1% /dev/shm /dev/vda1 485M 76M 384M 17% /boot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-filesystems-df
5.242. php
5.242. php 5.242.1. RHSA-2012:1046 - Moderate: php security update Updated php packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fixes CVE-2012-0057 It was discovered that the PHP XSL extension did not restrict the file writing capability of libxslt. A remote attacker could use this flaw to create or overwrite an arbitrary file that is writable by the user running PHP, if a PHP script processed untrusted eXtensible Style Sheet Language Transformations (XSLT) content. Note: This update disables file writing by default. A new PHP configuration directive, "xsl.security_prefs", can be used to enable file writing in XSLT. CVE-2012-1172 A flaw was found in the way PHP validated file names in file upload requests. A remote attacker could possibly use this flaw to bypass the sanitization of the uploaded file names, and cause a PHP script to store the uploaded file in an unexpected directory, by using a directory traversal attack. CVE-2012-2386 Multiple integer overflow flaws, leading to heap-based buffer overflows, were found in the way the PHP phar extension processed certain fields of tar archive files. A remote attacker could provide a specially-crafted tar archive file that, when processed by a PHP application using the phar extension, could cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running PHP. CVE-2010-2950 A format string flaw was found in the way the PHP phar extension processed certain PHAR files. A remote attacker could provide a specially-crafted PHAR file, which once processed in a PHP application using the phar extension, could lead to information disclosure and possibly arbitrary code execution via a crafted phar:// URI. CVE-2012-2143 A flaw was found in the DES algorithm implementation in the crypt() password hashing function in PHP. If the password string to be hashed contained certain characters, the remainder of the string was ignored when calculating the hash, significantly reducing the password strength. Note: With this update, passwords are no longer truncated when performing DES hashing. Therefore, new hashes of the affected passwords will not match stored hashes generated using vulnerable PHP versions, and will need to be updated. CVE-2012-2336 It was discovered that the fix for CVE-2012-1823, released via RHSA-2012:0546, did not properly filter all php-cgi command line arguments. A specially-crafted request to a PHP script could cause the PHP interpreter to execute the script in a loop, or output usage information that triggers an Internal Server Error. CVE-2012-0789 A memory leak flaw was found in the PHP strtotime() function call. A remote attacker could possibly use this flaw to cause excessive memory consumption by triggering many strtotime() function calls. CVE-2012-0781 A NULL pointer dereference flaw was found in the PHP tidy_diagnose() function. A remote attacker could use specially-crafted input to crash an application that uses tidy::diagnose. CVE-2011-4153 It was found that PHP did not check the zend_strndup() function's return value in certain cases.
A remote attacker could possibly use this flaw to crash a PHP application. Upstream acknowledges Rubin Xu and Joseph Bonneau as the original reporters of CVE-2012-2143. All php users should upgrade to these updated packages, which contain backported patches to resolve these issues. After installing the updated packages, the httpd daemon must be restarted for the update to take effect. 5.242.2. RHSA-2013:1061 - Critical: php security update Updated php packages that fix one security issue are now available for Red Hat Enterprise Linux 5.3 Long Life, and Red Hat Enterprise Linux 5.6, 6.2 and 6.3 Extended Update Support. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. PHP is an HTML-embedded scripting language commonly used with the Apache HTTP Server. Security Fix CVE-2013-4113 A buffer overflow flaw was found in the way PHP parsed deeply nested XML documents. If a PHP application used the xml_parse_into_struct() function to parse untrusted XML content, an attacker able to supply specially-crafted XML could use this flaw to crash the application or, possibly, execute arbitrary code with the privileges of the user running the PHP interpreter. All php users should upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, the httpd daemon must be restarted for the update to take effect.
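As a minimal sketch of applying these updates on Red Hat Enterprise Linux 6, you might run: USD yum update php and then restart the web server as described above: USD service httpd restart The exact package set that is updated depends on the php subpackages installed on your system.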
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/php
Instances and Images Guide
Instances and Images Guide Red Hat OpenStack Platform 16.0 Managing Instances and Images OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/instances_and_images_guide/index
2.3. Deployment
2.3. Deployment cpuspeed component, BZ# 626893 Some HP Proliant servers may report incorrect CPU frequency values in /proc/cpuinfo or /sys/device/system/cpu/*/cpufreq . This is due to the firmware manipulating the CPU frequency without providing any notification to the operating system. To avoid this, ensure that the HP Power Regulator option in the BIOS is set to OS Control . An alternative available on more recent systems is to set Collaborative Power Control to Enabled . releng component, BZ# 644778 Some packages in the Optional repositories on RHN have multilib file conflicts. Consequently, you cannot install both the primary architecture (for example, x86_64) and the secondary architecture (for example, i686) copies of such a package on the same machine simultaneously. To work around this issue, install only one copy of the conflicting package. releng component The openmpi-psm and openmpi-psm-devel packages are not provided on architectures other than AMD64 and Intel 64 for Red Hat Enterprise Linux 6.2. If the openmpi-psm.i686 and/or openmpi-psm-devel.i686 packages are installed on an AMD64 or an Intel 64 system, remove these packages before you attempt to update Open MPI (see the command example at the end of this section). grub component, BZ# 695951 On certain UEFI-based systems, you may need to type BOOTX64 rather than bootx64 to boot the installer due to case sensitivity issues. grub component, BZ# 698708 When rebuilding the grub package on the x86_64 architecture, the glibc-static.i686 package must be used. Using the glibc-static.x86_64 package will not meet the build requirements. parted component The parted utility in Red Hat Enterprise Linux 6 cannot handle Extended Address Volumes (EAV) Direct Access Storage Devices (DASD) that have more than 65535 cylinders. Consequently, EAV DASD drives cannot be partitioned using parted , and installation on EAV DASD drives will fail. To work around this issue, complete the installation on a non-EAV DASD drive, then add the EAV device after the installation using the tools provided in the s390-utils package. PackageKit component If you are being asked repeatedly to enter your root password while using PackageKit to update your system via non-Red Hat repositories, you may be affected by the PackageKit issue described in Section 2.11, "Desktop" .
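A minimal sketch of the Open MPI workaround mentioned above, assuming the 32-bit packages are present on the system, is: USD yum remove openmpi-psm.i686 openmpi-psm-devel.i686 followed by the update itself, for example USD yum update openmpi .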
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/deployment_issues
Chapter 14. Concepts for database connection pools
Chapter 14. Concepts for database connection pools Read this section to understand considerations and best practices for configuring database connection pools for Red Hat build of Keycloak. For a configuration where this is applied, visit Deploy Red Hat build of Keycloak for HA with the Red Hat build of Keycloak Operator . 14.1. Concepts Creating new database connections is expensive as it takes time. Creating them when a request arrives will delay the response, so it is good to have them created before the request arrives. It can also contribute to a stampede effect where creating a lot of connections in a short time makes things worse as it slows down the system and blocks threads. Closing a connection also invalidates all server-side statement caching for that connection. For the best performance, the values for the initial, minimal, and maximum database connection pool size should all be equal. This avoids creating new database connections when a new request comes in, which is costly. Keeping the database connection open for as long as possible allows for server-side statement caching bound to a connection. In the case of PostgreSQL, to use a server-side prepared statement, a query needs to be executed (by default) at least five times . See the PostgreSQL docs on prepared statements for more information.
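As a minimal sketch of keeping the initial, minimal, and maximum pool sizes equal, the Quarkus-based distribution exposes the db-pool-initial-size , db-pool-min-size , and db-pool-max-size options; verify the option names against your Red Hat build of Keycloak version, and treat the value 30 as an example only: USD bin/kc.sh start --db-pool-initial-size=30 --db-pool-min-size=30 --db-pool-max-size=30 Size the pool for your expected load and for the connection limits of your database.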
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/high_availability_guide/concepts-database-connections-
Chapter 8. Installing a private cluster on Azure
Chapter 8. Installing a private cluster on Azure In OpenShift Container Platform version 4.16, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 8.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 8.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 8.2.2.
User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 8.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. 
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 8.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 8.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. 
You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 8.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 8.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 8.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 8.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. 
Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 8.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 8.7.2. 
Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 8.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 8.7.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 8.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 8.7.4. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 
2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 8.7.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 8.7.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 14 21 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. 
The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 20 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 8.7.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 8.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 8.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.9.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 8.9.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 8.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.9.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
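For illustration, a filled-in invocation might look like the following. The resource name, region, subscription and tenant identifiers, and directory paths are placeholder values only and must be replaced with values from your own subscription:

ccoctl azure create-all \
    --name=openshift-wi-demo \
    --output-dir=./ccoctl-output \
    --region=centralus \
    --subscription-id=00000000-0000-0000-0000-000000000000 \
    --credentials-requests-dir=./credrequests \
    --dnszone-resource-group-name=os4-common-dns \
    --tenant-id=11111111-1111-1111-1111-111111111111

When the command completes, the generated secret manifests are written to ./ccoctl-output/manifests, which is the directory checked in the verification step that follows.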
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 8.9.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.10. Optional: Preparing a private Microsoft Azure cluster for a private image registry By installing a private image registry on a private Microsoft Azure cluster, you can create private storage endpoints. Private storage endpoints disable public facing endpoints to the registry's storage account, adding an extra layer of security to your OpenShift Container Platform deployment. Important Do not install a private image registry on Microsoft Azure Red Hat OpenShift (ARO), because the endpoint can put your Microsoft Azure Red Hat OpenShift cluster in an unrecoverable state. 
Use the following guide to prepare your private Microsoft Azure cluster for installation with a private image registry. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI (oc). You have prepared an install-config.yaml that includes the following information: The publish field is set to Internal You have set the permissions for creating a private storage endpoint. For more information, see "Azure permissions for installer-provisioned infrastructure". Procedure If you have not previously created installation manifest files, do so by running the following command: USD ./openshift-install create manifests --dir <installation_directory> This command displays the following messages: Example output INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift Create an image registry configuration object and pass in the networkResourceGroupName , subnetName , and vnetName provided by Microsoft Azure. For example: USD touch imageregistry-config.yaml apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: "Managed" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal 1 Optional. If you have an existing VNet and subnet setup, replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Optional. If you have an existing VNet and subnet setup, replace <subnet_name> with the name of the existing compute subnet within the specified resource group. 3 Optional. If you have an existing VNet and subnet setup, replace <vnet_name> with the name of the existing virtual network (VNet) in the specified resource group. Note The imageregistry-config.yaml file is consumed during the installation process. If desired, you must back it up before installation. Move the imageregistry-config.yaml file to the <installation_directory/manifests> folder by running the following command: USD mv imageregistry-config.yaml <installation_directory/manifests/> steps After you have moved the imageregistry-config.yaml file to the <installation_directory/manifests> folder and set the required permissions, proceed to "Deploying the cluster". Additional resources For the list of permissions needed to create a private storage endpoint, see Required Azure permissions for installer-provisioned infrastructure . 8.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. 
You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity,leave this value blank. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. 
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.14. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
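Before moving on to those steps, an informal spot check (not part of the documented procedure) is to confirm that key settings from install-config.yaml were applied to the running cluster. For example, the cluster-wide proxy object and the overall Operator status can be inspected with standard oc commands:

oc get proxy/cluster -o yaml
oc get clusteroperators

The first command shows the Proxy object named cluster that the installation program creates, including any httpProxy, httpsProxy, and noProxy values you supplied; the second lists the cluster Operators and their availability.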
[ "The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 publish: Internal 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 
--dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create manifests --dir <installation_directory>", "INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift", "touch imageregistry-config.yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: managementState: \"Managed\" replicas: 2 rolloutStrategy: RollingUpdate storage: azure: networkAccess: internal: networkResourceGroupName: <vnet_resource_group> 1 subnetName: <subnet_name> 2 vnetName: <vnet_name> 3 type: Internal", "mv imageregistry-config.yaml <installation_directory/manifests/>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/installing-azure-private
5.6. Controlling Traffic
5.6. Controlling Traffic 5.6.1. Predefined Services Services can be added and removed using the graphical firewall-config tool, firewall-cmd , and firewall-offline-cmd . Alternatively, you can edit the XML files in the /etc/firewalld/services/ directory. If a service is not added or changed by the user, then no corresponding XML file is found in /etc/firewalld/services/ . The files in the /usr/lib/firewalld/services/ directory can be used as templates if you want to add or change a service. 5.6.2. Disabling All Traffic in Case of Emergency using CLI In an emergency situation, such as a system attack, it is possible to disable all network traffic and cut off the attacker. To immediately disable networking traffic, switch panic mode on: Switching off panic mode reverts the firewall to its permanent settings. To switch panic mode off: To see whether panic mode is switched on or off, use: 5.6.3. Controlling Traffic with Predefined Services using CLI The most straightforward method to control traffic is to add a predefined service to firewalld . This opens all necessary ports and modifies other settings according to the service definition file . Check that the service is not already allowed: List all predefined services: Add the service to the allowed services: Make the new settings persistent: 5.6.4. Controlling Traffic with Predefined Services using GUI To enable or disable a predefined or custom service, start the firewall-config tool and select the network zone whose services are to be configured. Select the Services tab and select the check box for each type of service you want to trust. Clear the check box to block a service. To edit a service, start the firewall-config tool and select Permanent from the menu labeled Configuration . Additional icons and menu buttons appear at the bottom of the Services window. Select the service you want to configure. The Ports , Protocols , and Source Port tabs enable adding, changing, and removing of ports, protocols, and source port for the selected service. The modules tab is for configuring Netfilter helper modules. The Destination tab enables limiting traffic to a particular destination address and Internet Protocol ( IPv4 or IPv6 ). Note It is not possible to alter service settings in Runtime mode. 5.6.5. Adding New Services Services can be added and removed using the graphical firewall-config tool, firewall-cmd , and firewall-offline-cmd . Alternatively, you can edit the XML files in /etc/firewalld/services/ . If a service is not added or changed by the user, then no corresponding XML file are found in /etc/firewalld/services/ . The files /usr/lib/firewalld/services/ can be used as templates if you want to add or change a service. To add a new service in a terminal, use firewall-cmd , or firewall-offline-cmd in case of not active firewalld . enter the following command to add a new and empty service: To add a new service using a local file, use the following command: You can change the service name with the additional --name= service-name option. As soon as service settings are changed, an updated copy of the service is placed into /etc/firewalld/services/ . As root , you can enter the following command to copy a service manually: firewalld loads files from /usr/lib/firewalld/services in the first place. If files are placed in /etc/firewalld/services and they are valid, then these will override the matching files from /usr/lib/firewalld/services . 
The overriden files in /usr/lib/firewalld/services are used as soon as the matching files in /etc/firewalld/services have been removed or if firewalld has been asked to load the defaults of the services. This applies to the permanent environment only. A reload is needed to get these fallbacks also in the runtime environment. 5.6.6. Controlling Ports using CLI Ports are logical devices that enable an operating system to receive and distinguish network traffic and forward it accordingly to system services. These are usually represented by a daemon that listens on the port, that is it waits for any traffic coming to this port. Normally, system services listen on standard ports that are reserved for them. The httpd daemon, for example, listens on port 80. However, system administrators by default configure daemons to listen on different ports to enhance security or for other reasons. Opening a Port Through open ports, the system is accessible from the outside, which represents a security risk. Generally, keep ports closed and only open them if they are required for certain services. To get a list of open ports in the current zone: List all allowed ports: Add a port to the allowed ports to open it for incoming traffic: Make the new settings persistent: The port types are either tcp , udp , sctp , or dccp . The type must match the type of network communication. Closing a Port When an open port is no longer needed, close that port in firewalld . It is highly recommended to close all unnecessary ports as soon as they are not used because leaving a port open represents a security risk. To close a port, remove it from the list of allowed ports: List all allowed ports: Remove the port from the allowed ports to close it for the incoming traffic: Make the new settings persistent: 5.6.7. Opening Ports using GUI To permit traffic through the firewall to a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Ports tab and click the Add button on the right-hand side. The Port and Protocol window opens. Enter the port number or range of ports to permit. Select tcp or udp from the list. 5.6.8. Controlling Traffic with Protocols using GUI To permit traffic through the firewall using a certain protocol, start the firewall-config tool and select the network zone whose settings you want to change. Select the Protocols tab and click the Add button on the right-hand side. The Protocol window opens. Either select a protocol from the list or select the Other Protocol check box and enter the protocol in the field. 5.6.9. Opening Source Ports using GUI To permit traffic through the firewall from a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Source Port tab and click the Add button on the right-hand side. The Source Port window opens. Enter the port number or range of ports to permit. Select tcp or udp from the list.
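As a consolidated, illustrative example of the CLI-based workflow described in Sections 5.6.3 and 5.6.5, the following creates a hypothetical service named myapp that opens TCP port 8080, reloads firewalld so that the new definition is picked up, and then enables and persists it. The service name, port, and description are placeholders:

cat > /etc/firewalld/services/myapp.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>myapp</short>
  <description>Example application listening on TCP port 8080.</description>
  <port protocol="tcp" port="8080"/>
</service>
EOF
firewall-cmd --reload
firewall-cmd --add-service=myapp
firewall-cmd --runtime-to-permanent

Because the file is placed in /etc/firewalld/services/, it overrides any file of the same name shipped in /usr/lib/firewalld/services/, as explained above.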
[ "~]# firewall-cmd --panic-on", "~]# firewall-cmd --panic-off", "~]# firewall-cmd --query-panic", "~]# firewall-cmd --list-services ssh dhcpv6-client", "~]# firewall-cmd --get-services RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry [output truncated]", "~]# firewall-cmd --add-service= <service-name>", "~]# firewall-cmd --runtime-to-permanent", "~]USD firewall-cmd --new-service= service-name", "~]USD firewall-cmd --new-service-from-file= service-name .xml", "~]# cp /usr/lib/firewalld/services/ service-name .xml /etc/firewalld/services/ service-name .xml", "~]# firewall-cmd --list-ports", "~]# firewall-cmd --add-port= port-number / port-type", "~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --list-ports [WARNING] ==== This command will only give you a list of ports that have been opened as ports. You will not be able to see any open ports that have been opened as a service. Therefore, you should consider using the --list-all option instead of --list-ports. ====", "~]# firewall-cmd --remove-port= port-number / port-type", "~]# firewall-cmd --runtime-to-permanent" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-controlling_traffic
Chapter 1. Introduction to RHEL System Roles
Chapter 1. Introduction to RHEL System Roles By using RHEL System Roles, you can remotely manage the system configurations of multiple RHEL systems across major versions of RHEL. RHEL System Roles is a collection of Ansible roles and modules. To use it to configure systems, you must use the following components: Control node A control node is the system from which you run Ansible commands and playbooks. Your control node can be an Ansible Automation Platform, Red Hat Satellite, or a RHEL 9, 8, or 7 host. For more information, see Preparing a control node on RHEL 8 . Managed node Managed nodes are the servers and network devices that you manage with Ansible. Managed nodes are also sometimes called hosts. Ansible does not have to be installed on managed nodes. For more information, see Preparing a managed node . Ansible playbook In a playbook, you define the configuration you want to achieve on your managed nodes or a set of steps for the system on the managed node to perform. Playbooks are Ansible's configuration, deployment, and orchestration language. Inventory In an inventory file, you list the managed nodes and specify information such as IP address for each managed node. In an inventory, you can also organize managed nodes, creating and nesting groups for easier scaling. An inventory file is also sometimes called a hostfile. On Red Hat Enterprise Linux 8, you can use the following roles provided by the rhel-system-roles package, which is available in the AppStream repository: Role name Role description Chapter title certificate Certificate Issuance and Renewal Requesting certificates using RHEL System Roles cockpit Web console Installing and configuring web console with the cockpit RHEL System Role crypto_policies System-wide cryptographic policies Setting a custom cryptographic policy across systems firewall Firewalld Configuring firewalld using System Roles ha_cluster HA Cluster Configuring a high-availability cluster using System Roles kdump Kernel Dumps Configuring kdump using RHEL System Roles kernel_settings Kernel Settings Using Ansible roles to permanently configure kernel parameters logging Logging Using the logging System Role metrics Metrics (PCP) Monitoring performance using RHEL System Roles microsoft.sql.server Microsoft SQL Server Configuring Microsoft SQL Server using the microsoft.sql.server Ansible role network Networking Using the network RHEL System Role to manage InfiniBand connections nbde_client Network Bound Disk Encryption client Using the nbde_client and nbde_server System Roles nbde_server Network Bound Disk Encryption server Using the nbde_client and nbde_server System Roles postfix Postfix Variables of the postfix role in System Roles selinux SELinux Configuring SELinux using System Roles ssh SSH client Configuring secure communication with the ssh System Roles sshd SSH server Configuring secure communication with the ssh System Roles storage Storage Managing local storage using RHEL System Roles tlog Terminal Session Recording Configuring a system for session recording using the tlog RHEL System Role timesync Time Synchronization Configuring time synchronization using RHEL System Roles vpn VPN Configuring VPN connections with IPsec by using the vpn RHEL System Role Additional resources Red Hat Enterprise Linux (RHEL) System Roles /usr/share/doc/rhel-system-roles/ provided by the rhel-system-roles package
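As a minimal, hypothetical sketch of how these pieces fit together, the inventory and playbook below apply the timesync role from the rhel-system-roles package to a group of managed nodes. The host names, group name, and NTP server are placeholders, and the role variables should be checked against the role documentation in /usr/share/doc/rhel-system-roles/:

# inventory.ini
[managed_nodes]
managed-node-01.example.com
managed-node-02.example.com

# timesync-playbook.yml
---
- name: Configure time synchronization with the timesync role
  hosts: managed_nodes
  vars:
    timesync_ntp_servers:
      - hostname: 0.rhel.pool.ntp.org
        iburst: yes
  roles:
    - rhel-system-roles.timesync

The playbook is then run from the control node, for example with: ansible-playbook -i inventory.ini timesync-playbook.yml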
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/intro-to-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 5. Uninstalling a cluster on Nutanix
Chapter 5. Uninstalling a cluster on Nutanix You can remove a cluster that you deployed to Nutanix. 5.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
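As an informal pre-check before running the destroy command (not part of the documented procedure), you can confirm that the installation directory still contains the metadata.json file that the installation program needs; the directory name below is a placeholder:

ls ./mycluster-install/metadata.json
./openshift-install destroy cluster --dir ./mycluster-install --log-level info

If metadata.json is missing, the installation program cannot identify the cluster resources to delete, and you might have to remove them manually through the Nutanix interface.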
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_nutanix/uninstalling-cluster-nutanix
Part II. Certifying or Validating Containerized Applications
Part II. Certifying or Validating Containerized Applications
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/con_container-certification_configuring-the-system-and-running-tests-by-using-cockpit-for-non-containerized-application
Global File System 2
Global File System 2 Red Hat Enterprise Linux 7 Configuring and managing GFS2 file systems Steven Levine Red Hat Customer Content Services [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/index
12.5. iSCSI-based Storage Pools
12.5. iSCSI-based Storage Pools This section covers using iSCSI-based devices to store guest virtual machines. iSCSI (Internet Small Computer System Interface) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer. 12.5.1. Configuring a Software iSCSI Target The scsi-target-utils package provides a tool for creating software-backed iSCSI targets. Procedure 12.4. Creating an iSCSI target Install the required packages Install the scsi-target-utils package and all dependencies. Start the tgtd service The tgtd service hosts SCSI targets and uses the iSCSI protocol to serve the targets to initiators. Start the tgtd service and make the service persistent across restarts with the chkconfig command. Optional: Create LVM volumes LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be beneficial for guest virtual machines. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting guest virtual machines with iSCSI. Create the RAID array Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux Deployment Guide. Create the LVM volume group Create a volume group named virtstore with the vgcreate command. Create an LVM logical volume Create a logical volume named virtimage1 on the virtstore volume group with a size of 20GB using the lvcreate command. The new logical volume, virtimage1, is ready to use for iSCSI. Optional: Create file-based images File-based storage is sufficient for testing but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image named virtimage2.img for an iSCSI target. Create a new directory for the image Create a new directory to store the image. The directory must have the correct SELinux contexts. Create the image file Create an image named virtimage2.img with a size of 10GB. Configure SELinux file contexts Configure the correct SELinux context for the new image and directory. The new file-based image, virtimage2.img, is ready to use for iSCSI. Create targets Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format: Where: yyyy-mm represents the year and month the device was started (for example: 2010-05); reversed domain name is the host physical machine's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1); and optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware. This example creates iSCSI targets for the two types of images created in the optional steps on server1.example.com with an optional identifier trial. Add the following to the /etc/tgt/targets.conf file. Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI. The driver uses iSCSI by default. Important This example creates a globally accessible target without access control. Refer to the scsi-target-utils documentation for information on implementing secure access. Restart the tgtd service Restart the tgtd service to reload the configuration changes. iptables configuration Open port 3260 for iSCSI access with iptables. Verify the new targets View the new targets to ensure the setup was successful with the tgt-admin --show command.
Warning The ACL list is set to all. This allows all systems on the local network to access this device. It is recommended to set host access ACLs for production environments. Optional: Test discovery Test whether the new iSCSI device is discoverable. Optional: Test attaching the device Attach the new device ( iqn.2010-05.com.example.server1:iscsirhel6guest ) to determine whether the device can be attached. Detach the device. An iSCSI device is now ready to use for virtualization.
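To address the warning above, access to the target can be narrowed before restarting tgtd. The following sketch is illustrative only: the initiator address and the CHAP user name and password are placeholders, and the exact directives should be verified against the scsi-target-utils documentation:

<target iqn.2010-05.com.example.server1:iscsirhel6guest>
    backing-store /dev/virtstore/virtimage1
    backing-store /var/lib/tgtd/virtualization/virtimage2.img
    initiator-address 192.0.2.10
    incominguser iscsiuser Example_Password1
    write-cache off
</target>

After editing /etc/tgt/targets.conf, restart the tgtd service as shown earlier so that the new access restrictions take effect.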
[ "yum install scsi-target-utils", "service tgtd start chkconfig tgtd on", "vgcreate virtstore /dev/md1", "lvcreate --size 20G -n virtimage1 virtstore", "mkdir -p /var/lib/tgtd/ virtualization", "dd if=/dev/zero of=/var/lib/tgtd/ virtualization / virtimage2.img bs=1M seek=10000 count=0", "restorecon -R /var/lib/tgtd", "iqn. yyyy - mm . reversed domain name : optional identifier text", "<target iqn.2010-05.com.example. server1 : iscsirhel6guest > backing-store /dev/ virtstore / virtimage1 #LUN 1 backing-store /var/lib/tgtd/ virtualization / virtimage2.img #LUN 2 write-cache off </target>", "service tgtd restart", "iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT service iptables save service iptables restart", "tgt-admin --show Target 1: iqn.2010-05.com.example.server1:iscsirhel6guest System information: Driver: iscsi State: ready I_T nexus information: LUN information: LUN: 0 Type: controller SCSI ID: IET 00010000 SCSI SN: beaf10 Size: 0 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: None LUN: 1 Type: disk SCSI ID: IET 00010001 SCSI SN: beaf11 Size: 20000 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: /dev/ virtstore / virtimage1 LUN: 2 Type: disk SCSI ID: IET 00010002 SCSI SN: beaf12 Size: 10000 MB Online: Yes Removable media: No Backing store type: rdwr Backing store path: /var/lib/tgtd/ virtualization / virtimage2.img Account information: ACL information: ALL", "iscsiadm --mode discovery --type sendtargets --portal server1.example.com 127.0.0.1:3260,1 iqn.2010-05.com.example.server1:iscsirhel6guest", "iscsiadm -d2 -m node --login scsiadm: Max file limits 1024 1024 Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.", "iscsiadm -d2 -m node --logout scsiadm: Max file limits 1024 1024 Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260 Logout of [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-storage_pools-creating-iscsi
Chapter 16. Networking
Chapter 16. Networking Mellanox SR-IOV Support Single Root I/O Virtualization (SR-IOV) is now supported as a Technology Preview in the Mellanox libmlx4 library and the following drivers: mlx4_core mlx4_ib (InfiniBand protocol) mlx4_en (Ethernet protocol) Package: kernel QFQ queuing discipline In Red Hat Enterprise Linux 6, the tc utility has been updated to work with the Quick Fair Queueing (QFQ) kernel features. Users can now take advantage of the new QFQ traffic queuing discipline from userspace. This feature is considered a Technology Preview. Package: kernel
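As an illustration of the QFQ queuing discipline, the following hedged sketch (the interface name, weight, and destination address are placeholders, not taken from the original note) attaches a QFQ qdisc with the tc utility, adds a class, and directs traffic to it with a filter:
# Attach a QFQ qdisc as the root queuing discipline on eth0 (placeholder interface)
tc qdisc add dev eth0 root handle 1: qfq
# Add a QFQ class with an illustrative weight and maximum packet size
tc class add dev eth0 parent 1: classid 1:1 qfq weight 10 maxpkt 1514
# Send traffic for a placeholder destination address to that class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 192.0.2.10/32 flowid 1:1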
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/chap-red_hat_enterprise_linux-6.10_technical_notes-technology_previews-networking
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel_with_fips/proc-providing-feedback-on-redhat-documentation
Chapter 4. Security and Authentication of HawtIO
Chapter 4. Security and Authentication of HawtIO Note You can enable access logging on the runtimes/containers (e.g. Quarkus, OpenShift) as a security defensive measure for validating access. Access records can be used to investigate access attempts in the event of a security incident. HawtIO enables authentication out of the box depending on the runtimes/containers it runs with. To use HawtIO with your application, either setting up authentication for the runtime or disabling HawtIO authentication is necessary. 4.1. Configuration properties The following table lists the Security-related configuration properties for the HawtIO core system. Name Default Description hawtio.authenticationContainerDiscoveryClasses io.hawt.web.tomcat.TomcatAuthenticationContainerDiscovery List of used AuthenticationContainerDiscovery implementations separated by a comma. By default, there is just TomcatAuthenticationContainerDiscovery, which is used to authenticate users on Tomcat from tomcat-users.xml file. Feel free to remove it if you want to authenticate users on Tomcat from the configured JAAS login module or feel free to add more classes of your own. hawtio.authenticationContainerTomcatDigestAlgorithm NONE When using the Tomcat tomcat-users.xml file, passwords can be hashed instead of plain text. Use this to specify the digest algorithm; valid values are NONE, MD5, SHA, SHA-256, SHA-384, and SHA-512. hawtio.authenticationEnabled true Whether or not security is enabled. hawtio.keycloakClientConfig classpath:keycloak.json Keycloak configuration file used for the front end. It is mandatory if Keycloak integration is enabled. hawtio.keycloakEnabled false Whether to enable or disable Keycloak integration. hawtio.noCredentials401 false Whether to return HTTP status 401 when authentication is enabled, but no credentials have been provided. Returning 401 will cause the browser popup window to prompt for credentials. By default this option is false, returning HTTP status 403 instead. hawtio.realm hawtio The security realm used to log in. hawtio.rolePrincipalClasses Fully qualified principal class name(s). A comma can separate multiple classes. hawtio.roles Admin, manager, viewer The user roles are required to log in to the console. A comma can separate multiple roles to allow. Set to * or an empty value to disable role checking when HawtIO authenticates a user. hawtio.tomcatUserFileLocation conf/tomcat-users.xml Specify an alternative location for the tomcat-users.xml file, e.g. /production/userlocation/. 4.2. Quarkus HawtIO is secured with the authentication mechanisms that Quarkus and also Keycloak provide. If you want to disable HawtIO authentication for Quarkus, add the following configuration to application.properties : quarkus.hawtio.authenticationEnabled = false 4.2.1. Quarkus authentication mechanisms HawtIO is just a web application in terms of Quarkus, so the various mechanisms Quarkus provides are used to authenticate HawtIO in the same way it authenticates a Web application. Here we show how you can use the properties-based authentication with HawtIO for demonstrating purposes. Important The properties-based authentication is not recommended for use in production. This mechanism is for development and testing purposes only. 
To use the properties-based authentication with HawtIO, add the following dependency to pom.xml : <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency> You can then define users in application.properties to enable the authentication. For example, defining a user hawtio with password s3cr3t! and role admin would look like the following: quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin Example: See Quarkus example for a working example of the properties-based authentication. 4.2.2. Quarkus with Keycloak See Keycloak Integration - Quarkus . 4.3. Spring Boot In addition to the standard JAAS authentication, HawtIO on Spring Boot can be secured through Spring Security or Keycloak . If you want to disable HawtIO authentication for Spring Boot, add the following configuration to application.properties : hawtio.authenticationEnabled = false 4.3.1. Spring Security To use Spring Security with HawtIO: Add org.springframework.boot:spring-boot-starter-security to the dependencies in pom.xml : <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> Spring Security configuration in src/main/resources/application.properties should look like the following: spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer A security config class has to be defined to set up how to secure the application with Spring Security: @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeHttpRequests(authorize -> authorize .anyRequest().authenticated() ) .formLogin(withDefaults()) .httpBasic(withDefaults()) .csrf(csrf -> csrf .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()) .csrfTokenRequestHandler(new SpaCsrfTokenRequestHandler()) ) .addFilterAfter(new CsrfCookieFilter(), BasicAuthenticationFilter.class); return http.build(); } } Note Refreshing the token after authentication success and logout success is required because the CsrfAuthenticationStrategy and CsrfLogoutHandler will clear the token. The client application will not be able to perform an unsafe HTTP request, such as a POST, without obtaining a fresh token. Example: See springboot-security example for a working example. 4.3.1.1. Connecting to a remote application with Spring Security If you try to connect to a remote Spring Boot application with Spring Security enabled, make sure the Spring Security configuration allows access from the HawtIO console. Most likely, the default CSRF protection prohibits remote access to the Jolokia endpoint and thus causes authentication failures at the HawtIO console. Warning Be aware that it will expose your application to the risk of CSRF attacks. The easiest solution is to disable CSRF protection for the Jolokia endpoint at the remote application as follows. import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { ... 
// Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } } To secure the Jolokia endpoint even without Spring Security's CSRF protection, you need to provide a jolokia-access.xml file under src/main/resources/ like the following (snippet) so that only trusted nodes can access it: <restrict> ... <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict> 4.3.2. Spring Boot with Keycloak See Keycloak Integration - Spring Boot . 4.4. Keycloak Integration You can secure your HawtIO console with Keycloak . To integrate HawtIO with Keycloak, you need to: Prepare Keycloak server Deploy HawtIO to your favourite runtime (Quarkus, Spring Boot, WildFly, Karaf, Jetty, Tomcat, etc.) and configure it to use Keycloak for authentication 4.4.1. Prepare Keycloak server Install and run a Keycloak server. The easiest way is to use a Docker image : docker run -d --name keycloak \ -p 18080:8080 \ -e KEYCLOAK_ADMIN=admin \ -e KEYCLOAK_ADMIN_PASSWORD=admin \ quay.io/keycloak/keycloak start-dev Here we use port number 18080 for the Keycloak server to avoid potential conflicts with the ports other applications might use. You can log in to the Keycloak admin console at http://localhost:18080/admin/ with user admin / password admin . Import hawtio-demo-realm.json into Keycloak. To do so, click the Create Realm button and then import hawtio-demo-realm.json . This creates the hawtio-demo realm. The hawtio-demo realm has the hawtio-client application installed as a public client, and defines a couple of realm roles such as admin and viewer . The names of these roles are the same as the default HawtIO roles, which are allowed to log in to the HawtIO admin console and to JMX. There are also 3 users: admin User with password admin and role admin , who is allowed to log in to HawtIO. viewer User with password viewer and role viewer , who is allowed to log in to HawtIO. jdoe User with password password and no role assigned, who is not allowed to log in to HawtIO. Note Currently, the difference in roles does not affect HawtIO access rights on Quarkus and Spring Boot, as HawtIO RBAC functionality is not yet implemented on those runtimes. 4.4.2. Configuration HawtIO's configuration for Keycloak integration consists of two parts: integration with Keycloak in the runtime (server side), and integration with Keycloak in the HawtIO console (client side). The following settings need to be made for each part: Server side The runtime-specific configuration for the Keycloak adapter Client side The HawtIO Keycloak configuration keycloak-hawtio.json 4.4.2.1. Quarkus Firstly, apply the required configuration for attaching HawtIO to a Quarkus application. What you need to integrate your Quarkus application with Keycloak is the Quarkus OIDC extension. Add the following dependency to pom.xml : pom.xml <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> 4.4.2.1.1.
Server side Then add the following lines to application.properties (which configures the server-side OIDC extension): application.properties quarkus.oidc.auth-server-url = http://localhost:18080/realms/hawtio-demo quarkus.oidc.client-id = hawtio-client quarkus.oidc.credentials.secret = secret quarkus.oidc.application-type = web-app quarkus.oidc.token-state-manager.split-tokens = true quarkus.http.auth.permission.authenticated.paths = "/*" quarkus.http.auth.permission.authenticated.policy = authenticated Important quarkus.oidc.token-state-manager.split-tokens = true is important, as otherwise you might encounter a large size session cookie token issue and fail to integrate with Keycloak. 4.4.2.1.2. Client side Finally create keycloak-hawtio.json under src/main/resources in the Quarkus application project (which serves as the client-side HawtIO JS configuration): keycloak-hawtio.json { "realm": "hawtio-demo", "clientId": "hawtio-client", "url": "http://localhost:18080/", "jaas": false, "pkceMethod": "S256" } Note Set pkceMethod to S256 depending on Proof Key for Code Exchange Code Challenge Method advanced settings configuration. If PKCE is not enabled, do not set this option. Build and run the project and it will be integrated with Keycloak. 4.4.2.1.3. Example See quarkus-keycloak example for a working example. 4.4.2.2. Spring Boot Firstly, apply [the required configuration for attaching HawtIO to a Spring Boot application. What you need to integrate your Spring Boot application with Keycloak is to add the following dependency to pom.xml (replace 4.x.y with the latest HawtIO release version): pom.xml <dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot-keycloak</artifactId> <version>4.x.y</version> </dependency> 4.4.2.2.1. Server side Then add the following lines in application.properties (which configures the server-side Keycloak adapter): application.properties keycloak.realm = hawtio-demo keycloak.resource = hawtio-client keycloak.auth-server-url = http://localhost:18080/ keycloak.ssl-required = external keycloak.public-client = true keycloak.principal-attribute = preferred_username 4.4.2.2.2. Client side Finally create keycloak-hawtio.json under src/main/resources in the Spring Boot project (which serves as the client-side HawtIO JS configuration): keycloak-hawtio.json { "realm": "hawtio-demo", "clientId": "hawtio-client", "url": "http://localhost:18080/", "jaas": false } Build and run the project and it will be integrated with Keycloak. 4.4.2.2.3. Example See springboot-keycloak example for a working example.
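Note that the client-side file in these examples is named keycloak-hawtio.json , while the default in the configuration properties table above is classpath:keycloak.json . If HawtIO does not pick the file up automatically in your setup, a hedged option (an assumption, not taken from the examples above) is to point the hawtio.keycloakClientConfig property at it explicitly, for example in application.properties :
# Assumption: explicitly reference the client-side Keycloak configuration file
hawtio.keycloakClientConfig = classpath:keycloak-hawtio.json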
[ "quarkus.hawtio.authenticationEnabled = false", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-elytron-security-properties-file</artifactId> </dependency>", "quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.hawtio = s3cr3t! quarkus.security.users.embedded.roles.hawtio = admin", "hawtio.authenticationEnabled = false", "<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency>", "spring.security.user.name = hawtio spring.security.user.password = s3cr3t! spring.security.user.roles = admin,viewer", "@EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeHttpRequests(authorize -> authorize .anyRequest().authenticated() ) .formLogin(withDefaults()) .httpBasic(withDefaults()) .csrf(csrf -> csrf .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse()) .csrfTokenRequestHandler(new SpaCsrfTokenRequestHandler()) ) .addFilterAfter(new CsrfCookieFilter(), BasicAuthenticationFilter.class); return http.build(); } }", "import org.springframework.boot.actuate.autoconfigure.jolokia.JolokiaEndpoint; import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest; @EnableWebSecurity public class SecurityConfig { @Bean public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { // Disable CSRF protection for the Jolokia endpoint http.csrf().ignoringRequestMatchers(EndpointRequest.to(JolokiaEndpoint.class)); return http.build(); } }", "<restrict> <cors> <allow-origin>http*://localhost:*</allow-origin> <allow-origin>http*://127.0.0.1:*</allow-origin> <allow-origin>http*://*.example.com</allow-origin> <allow-origin>http*://*.example.com:*</allow-origin> <strict-checking /> </cors> </restrict>", "docker run -d --name keycloak -p 18080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak start-dev", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>", "quarkus.oidc.auth-server-url = http://localhost:18080/realms/hawtio-demo quarkus.oidc.client-id = hawtio-client quarkus.oidc.credentials.secret = secret quarkus.oidc.application-type = web-app quarkus.oidc.token-state-manager.split-tokens = true quarkus.http.auth.permission.authenticated.paths = \"/*\" quarkus.http.auth.permission.authenticated.policy = authenticated", "{ \"realm\": \"hawtio-demo\", \"clientId\": \"hawtio-client\", \"url\": \"http://localhost:18080/\", \"jaas\": false, \"pkceMethod\": \"S256\" }", "<dependency> <groupId>io.hawt</groupId> <artifactId>hawtio-springboot-keycloak</artifactId> <version>4.x.y</version> </dependency>", "keycloak.realm = hawtio-demo keycloak.resource = hawtio-client keycloak.auth-server-url = http://localhost:18080/ keycloak.ssl-required = external keycloak.public-client = true keycloak.principal-attribute = preferred_username", "{ \"realm\": \"hawtio-demo\", \"clientId\": \"hawtio-client\", \"url\": \"http://localhost:18080/\", \"jaas\": false }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/security-and-authentication-of-hawtio
Chapter 1. Getting Started with Camel Spring Boot
Chapter 1. Getting Started with Camel Spring Boot This guide introduces Camel Spring Boot and demonstrates how to get started building an application using Camel Spring Boot: Section 1.1, "Camel Spring Boot starters" Section 1.2, "Spring Boot" Section 1.3, "Component Starters" Section 1.4, "Starter Configuration" Section 1.5, "Generating a Camel for Spring Boot application using Maven" Section 1.7, "Applying patch to Camel Spring Boot" Section 1.8, "Camel REST DSL OpenApi Maven Plugin" Section 1.9, "Support for FIPS Compliance" 1.1. Camel Spring Boot starters Camel support for Spring Boot provides auto-configuration of the Camel and starters for many Camel components . The opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers the key Camel utilities (such as producer template, consumer template and the type converter) as beans. Note For information about using a Maven archtype to generate a Camel for Spring Boot application see Generating a Camel for Spring Boot application using Maven . To get started, you must add the Camel Spring Boot BOM to your Maven pom.xml file. <dependencyManagement> <dependencies> <!-- Camel BOM --> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>3.20.1.redhat-00109</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... --> </dependencies> </dependencyManagement> The camel-spring-boot-bom is a basic BOM that contains the list of Camel Spring Boot starter JARs. , add the Camel Spring Boot starter to startup the Camel Context . <dependencies> <!-- Camel Starter --> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies> You must also add any component starters that your Spring Boot application requires. The following example shows how to add the auto-configuration starter to the MQTT5 component <dependencies> <!-- ... other dependencies ... --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5</artifactId> </dependency> </dependencies> 1.1.1. Camel Spring Boot BOM vs Camel Spring Boot Dependencies BOM The curated camel-spring-boot-dependencies BOM, which is generated, contains the adjusted JARs that both Spring Boot and Apache Camel use to avoid any conflicts. This BOM is used to test camel-spring-boot itself. Spring Boot users may choose to use pure Camel dependencies by using the camel-spring-boot-bom that only has the Camel starter JARs as managed dependencies. However, this may lead to a classpath conflict if a third-party JAR from Spring Boot is not compatible with a particular Camel component. 1.1.2. Spring Boot configuration support Each starter lists configuration parameters you can configure in the standard application.properties or application.yml files. These parameters have the form of camel.component.[component-name].[parameter] . For example to configure the URL of the MQTT5 broker you can set: 1.1.3. Adding Camel routes Camel routes are detected in the Spring application context, for example a route annotated with org.springframework.stereotype.Component will be loaded, added to the Camel context and run. 
import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from("...") .to("..."); } } 1.2. Spring Boot Spring Boot automatically configures Camel for you. The opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers the key Camel utilities (like producer template, consumer template and the type converter) as beans. Maven users will need to add the following dependency to their pom.xml in order to use this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency> camel-spring-boot jar comes with the spring.factories file, so as soon as you add that dependency into your classpath, Spring Boot will automatically auto-configure Camel for you. 1.2.1. Camel Spring Boot Starter Apache Camel ships a Spring Boot Starter module that allows you to develop Spring Boot applications using starters. There is a sample application in the source code also. To use the starter, add the following to your spring boot pom.xml file: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency> Then you can just add classes with your Camel routes such as: package com.example; import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from("timer:foo").to("log:bar"); } } Then these routes will be started automatically. You can customize the Camel application in the application.properties or application.yml file. 1.2.2. Spring Boot Auto-configuration When using spring-boot with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-starter</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency> 1.2.3. Auto-configured Camel context The most important piece of functionality provided by the Camel auto-configuration is the CamelContext instance. Camel auto-configuration creates a SpringCamelContext for you and takes care of the proper initialization and shutdown of that context. The created Camel context is also registered in the Spring application context (under the camelContext bean name), so you can access it like any other Spring bean. @Configuration public class MyAppConfig { @Autowired CamelContext camelContext; @Bean MyService myService() { return new DefaultMyService(camelContext); } } 1.2.4. Auto-detecting Camel routes Camel auto-configuration collects all the RouteBuilder instances from the Spring context and automatically injects them into the provided CamelContext . 
This means that creating new Camel routes with the Spring Boot starter is as simple as adding the @Component annotated class to your classpath: @Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from("jms:invoices").to("file:/invoices"); } } Or creating a new route RouteBuilder bean in your @Configuration class: @Configuration public class MyRouterConfiguration { @Bean RoutesBuilder myRouter() { return new RouteBuilder() { @Override public void configure() throws Exception { from("jms:invoices").to("file:/invoices"); } }; } } 1.2.5. Camel properties Spring Boot auto-configuration automatically connects to Spring Boot external configuration (which may contain properties placeholders, OS environment variables or system properties) with the Camel properties support. It basically means that any property defined in application.properties file: route.from = jms:invoices Or set via system property: java -Droute.to=jms:processed.invoices -jar mySpringApp.jar can be used as placeholders in Camel route: @Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from("{{route.from}}").to("{{route.to}}"); } } 1.2.6. Custom Camel context configuration If you want to perform some operations on CamelContext bean created by Camel auto-configuration, register CamelContextConfiguration instance in your Spring context: @Configuration public class MyAppConfig { @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override void beforeApplicationStart(CamelContext context) { // your custom configuration goes here } }; } } The method beforeApplicationStart will be called just before the Spring context is started, so the CamelContext instance passed to this callback is fully auto-configured. If you add multiple instances of CamelContextConfiguration into your Spring context, each instance is executed. 1.2.7. Auto-configured consumer and producer templates Camel auto-configuration provides pre-configured ConsumerTemplate and ProducerTemplate instances. You can simply inject them into your Spring-managed beans: @Component public class InvoiceProcessor { @Autowired private ProducerTemplate producerTemplate; @Autowired private ConsumerTemplate consumerTemplate; public void processNextInvoice() { Invoice invoice = consumerTemplate.receiveBody("jms:invoices", Invoice.class); ... producerTemplate.sendBody("netty-http:http://invoicing.com/received/" + invoice.id()); } } By default, consumer templates and producer templates come with the endpoint cache sizes set to 1000. You can change these values by modifying the following Spring properties: camel.springboot.consumer-template-cache-size = 100 camel.springboot.producer-template-cache-size = 200 1.2.8. Auto-configured TypeConverter Camel auto-configuration registers a TypeConverter instance named typeConverter in the Spring context. @Component public class InvoiceProcessor { @Autowired private TypeConverter typeConverter; public long parseInvoiceValue(Invoice invoice) { String invoiceValue = invoice.grossValue(); return typeConverter.convertTo(Long.class, invoiceValue); } } 1.2.8.1. Spring type conversion API bridge Spring comes with the powerful type conversion API . The Spring API is similar to the Camel type converter API. As both APIs are so similar, Camel Spring Boot automatically registers a bridge converter ( SpringTypeConverter ) that delegates to the Spring conversion API. 
This means that out-of-the-box Camel will treat Spring Converters like Camel ones. With this approach you can use both Camel and Spring converters accessed via Camel TypeConverter API: @Component public class InvoiceProcessor { @Autowired private TypeConverter typeConverter; public UUID parseInvoiceId(Invoice invoice) { // Using Spring's StringToUUIDConverter UUID id = invoice.typeConverter.convertTo(UUID.class, invoice.getId()); } } Under the hood Camel Spring Boot delegates conversion to the Spring's ConversionService instances available in the application context. If no ConversionService instance is available, Camel Spring Boot auto-configuration will create one for you. 1.2.9. Keeping the application alive Camel applications which have this feature enabled launch a new thread on startup for the sole purpose of keeping the application alive by preventing JVM termination. This means that after you start a Camel application with Spring Boot, your application waits for a Ctrl+C signal and does not exit immediately. The controller thread can be activated using the camel.springboot.main-run-controller to true . camel.springboot.main-run-controller = true Applications using web modules (for example, applications that import the org.springframework.boot:spring-boot-web-starter module), usually don't need to use this feature because the application is kept alive by the presence of other non-daemon threads. 1.2.10. Adding XML routes By default, you can put Camel XML routes in the classpath under the directory camel, which camel-spring-boot will auto-detect and include. You can configure the directory name or turn this off using the configuration option: # turn off camel.springboot.routes-include-pattern = false # scan only in the com/foo/routes classpath camel.springboot.routes-include-pattern = classpath:com/foo/routes/*.xml The XML files should be Camel XML routes ( not <CamelContext> ) such as: <routes xmlns="http://camel.apache.org/schema/spring"> <route id="test"> <from uri="timer://trigger"/> <transform> <simple>ref:myBean</simple> </transform> <to uri="log:out"/> </route> </routes> 1.2.11. Testing the JUnit 5 way For testing, Maven users will need to add the following dependencies to their pom.xml : <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <version>2.7.18</version> <!-- Use the same version as your Spring Boot version --> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-spring-junit5</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> <scope>test</scope> </dependency> To test a Camel Spring Boot application, annotate your test class(es) with @CamelSpringBootTest . This brings Camel's Spring Test support to your application, so that you can write tests using Spring Boot test conventions . To get the CamelContext or ProducerTemplate , you can inject them into the class in the normal Spring manner, using @Autowired . You can also use camel-test-spring-junit5 to configure tests declaratively. 
This example uses the @MockEndpoints annotation to auto-mock an endpoint: @CamelSpringBootTest @SpringBootApplication @MockEndpoints("direct:end") public class MyApplicationTest { @Autowired private ProducerTemplate template; @EndpointInject("mock:direct:end") private MockEndpoint mock; @Test public void testReceive() throws Exception { mock.expectedBodiesReceived("Hello"); template.sendBody("direct:start", "Hello"); mock.assertIsSatisfied(); } } 1.3. Component Starters Camel Spring Boot supports the following Camel artifacts as Spring Boot Starters: Table 1.1, "Camel Components" Table 1.2, "Camel Data Formats" Table 1.3, "Camel Languages" Table 1.4, "Miscellaneous Extensions" Note Reference documentation is not yet available for some of the artifacts listed below. This documentation will be released as soon as it is available. Table 1.1. Camel Components Component Artifact Description AMQP camel-amqp-starter Messaging with AMQP protocol using Apache QPid Client. AWS Cloudwatch camel-aws2-cw-starter Sending metrics to AWS CloudWatch using AWS SDK version 2.x. AWS DynamoDB camel-aws2-ddb-starter Store and retrieve data from AWS DynamoDB service using AWS SDK version 2.x. AWS Kinesis camel-aws2-kinesis-starter Consume and produce records from and to AWS Kinesis Streams using AWS SDK version 2.x. AWS Lambda camel-aws2-lambda-starter Manage and invoke AWS Lambda functions using AWS SDK version 2.x. AWS S3 Storage Service camel-aws2-s3-starter Store and retrieve objects from AWS S3 Storage Service using AWS SDK version 2.x. AWS Simple Notification System (SNS) camel-aws2-sns-starter Send messages to an AWS Simple Notification Topic using AWS SDK version 2.x. AWS Simple Queue Service (SQS) camel-aws2-sqs-starter Send and receive messages to/from AWS SQS service using AWS SDK version 2.x. Azure ServiceBus camel-azure-servicebus-starter Send and receive messages to/from Azure Event Bus. Azure Storage Blob Service camel-azure-storage-blob-starter Store and retrieve blobs from Azure Storage Blob Service using SDK v12. Azure Storage Queue Service camel-azure-storage-queue-starter The azure-storage-queue component is used for storing and retrieving the messages to/from Azure Storage Queue using Azure SDK v12. Bean camel-bean-starter Invoke methods of Java beans stored in Camel registry. Bean Validator camel-bean-validator-starter Validate the message body using the Java Bean Validation API. Browse camel-browse-starter Inspect the messages received on endpoints supporting BrowsableEndpoint. Cassandra CQL camel-cassandraql-starter Integrate with Cassandra 2.0 using the CQL3 API (not the Thrift API). Based on Cassandra Java Driver provided by DataStax. Control Bus camel-controlbus-starter Manage and monitor Camel routes. Cron camel-cron-starter A generic interface for triggering events at times specified through the Unix cron syntax. CXF camel-cxf-soap-starter Expose SOAP WebServices using Apache CXF or connect to external WebServices using CXF WS client. Data Format camel-dataformat-starter Use a Camel Data Format as a regular Camel Component. Dataset camel-dataset-starter Provide data for load and soak testing of your Camel application. Direct camel-direct-starter Call another endpoint from the same Camel Context synchronously. Elastic Search camel-elasticsearch-starter Send requests to ElasticSearch via Java Client API. FHIR camel-fhir-starter Exchange information in the healthcare domain using the FHIR (Fast Healthcare Interoperability Resources) standard. 
File camel-file-starter Read and write files. FTP camel-ftp-starter Upload and download files to/from FTP servers. Google BigQuery camel-google-bigquery-starter Google BigQuery data warehouse for analytics. Google Pubsub camel-google-pubsub-starter Send and receive messages to/from Google Cloud Platform PubSub Service. HTTP camel-http-starter Send requests to external HTTP servers using Apache HTTP Client 4.x. Infinispan camel-infinispan-starter Read and write from/to Infinispan distributed key/value store and data grid. Jira camel-jira-starter Interact with JIRA issue tracker. JMS camel-jms-starter Sent and receive messages to/from a JMS Queue or Topic. JPA camel-jpa-starter Store and retrieve Java objects from databases using Java Persistence API (JPA). JSLT camel-jslt-starter Query or transform JSON payloads using an JSLT. Kafka camel-kafka-starter Sent and receive messages to/from an Apache Kafka broker. Kamelet camel-kamelet-starter To call Kamelets Language camel-language-starter Execute scripts in any of the languages supported by Camel. Log camel-log-starter Log messages to the underlying logging mechanism. Mail camel-mail-starter Send and receive emails using imap, pop3 and smtp protocols. Mail Microsoft OAuth camel-mail-microsoft-oauth-starter Camel Mail OAuth2 Authenticator for Microsoft Exchange Online MapStruct camel-mapstruct-starter Type Conversion using Mapstruct Master camel-master-starter Have only a single consumer in a cluster consuming from a given endpoint; with automatic failover if the JVM dies. Minio camel-minio-starter Store and retrieve objects from Minio Storage Service using Minio SDK. MLLP camel-mllp-starter Communicate with external systems using the MLLP protocol. Mock camel-mock-starter Test routes and mediation rules using mocks. MongoDB camel-mongodb-starter Perform operations on MongoDB documents and collections. Netty camel-netty-starter Socket level networking using TCP or UDP with Netty 4.x. Paho camel-paho-starter Communicate with MQTT message brokers using Eclipse Paho MQTT Client. Paho MQTT 5 camel-paho-mqtt5-starter Communicate with MQTT message brokers using Eclipse Paho MQTT v5 Client. Quartz camel-quartz-starter Schedule sending of messages using the Quartz 2.x scheduler. Ref camel-ref-starter Route messages to an endpoint looked up dynamically by name in the Camel Registry. REST camel-rest-starter Expose REST services or call external REST services. Salesforce camel-salesforce-starter Communicate with Salesforce using Java DTOs. SAP camel-sap-starter Uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and the SAP IDoc library to facilitate the transmission of documents in the Intermediate Document (IDoc) format. Scheduler camel-scheduler-starter Generate messages in specified intervals using java.util.concurrent.ScheduledExecutorService. SEDA camel-seda-starter Asynchronously call another endpoint from any Camel Context in the same JVM. Servlet camel-servlet-starter Serve HTTP requests by a Servlet. Slack camel-slack-starter Send and receive messages to/from Slack. Spring Batch camel-spring-batch Send messages to Spring Batch for further processing. Spring JDBC camel-spring-jdbc Access databases through SQL and JDBC with Spring Transaction support. Spring LDAP camel-spring-ldap Perform searches in LDAP servers using filters as the message payload. Spring RabbitMQ camel-spring-rabbitmq Send and receive messages from RabbitMQ using Spring RabbitMQ client. 
Spring Redis camel-spring-redis Send and receive messages from Redis. Spring Webservice camel-spring-ws You can use this component to integrate with Spring Web Services. It offers client-side support for accessing web services and server-side support for creating your contract-first web services. SQL camel-sql-starter Perform SQL queries using Spring JDBC. Stub camel-stub-starter Stub out any physical endpoints while in development or testing. Telegram camel-telegram-starter Send and receive messages acting as a Telegram Bot Telegram Bot API. Timer camel-timer-starter Generate messages in specified intervals using java.util.Timer. Validator camel-validator-starter Validate the payload using XML Schema and JAXP Validation. Webhook camel-webhook-starter Expose webhook endpoints to receive push notifications for other Camel components. XSLT camel-xslt-starter Transforms XML payload using an XSLT template. Table 1.2. Camel Data Formats Component Artifact Description Avro camel-avro-starter Serialize and deserialize messages using Apache Avro binary data format. Avro Jackson camel-jackson-avro-starter Marshal POJOs to Avro and back using Jackson. Bindy camel-bindy-starter Marshal and unmarshal between POJOs and key-value pair (KVP) format using Camel Bindy HL7 camel-hl7-starter Marshal and unmarshal HL7 (Health Care) model objects using the HL7 MLLP codec. JacksonXML camel-jacksonxml-starter Unmarshal a XML payloads to POJOs and back using XMLMapper extension of Jackson. JAXB camel-jaxb-starter Unmarshal XML payloads to POJOs and back using JAXB2 XML marshalling standard. JSON Gson camel-gson-starter Marshal POJOs to JSON and back using Gson JSON Jackson camel-jackson-starter Marshal POJOs to JSON and back using Jackson Protobuf Jackson camel-jackson-protobuf-starter Marshal POJOs to Protobuf and back using Jackson. SOAP camel-soap-starter Marshal Java objects to SOAP messages and back. Zip File camel-zipfile-starter Compression and decompress streams using java.util.zip.ZipStream. Table 1.3. Camel Languages Language Artifact Description Constant camel-core-starter A fixed value set only once during the route startup. CSimple camel-core-starter Evaluate a compiled simple expression. ExchangeProperty camel-core-starter Gets a property from the Exchange. File camel-core-starter File related capabilities for the Simple language. Header camel-core-starter Gets a header from the Exchange. JSONPath camel-jsonpath-starter Evaluates a JSONPath expression against a JSON message body. Ref camel-core-starter Uses an existing expression from the registry. Simple camel-core-starter Evaluates a Camel simple expression. Tokenize camel-core-starter Tokenize text payloads using delimiter patterns. XML Tokenize camel-xml-jaxp-starter Tokenize XML payloads. XPath camel-xpath-starter Evaluates an XPath expression against an XML payload. XQuery camel-saxon-starter Query and/or transform XML payloads using XQuery and Saxon. Table 1.4. Miscellaneous Extensions Extensions Artifact Description Kamelet Main camel-kamelet-main-starter Main to run Kamelet standalone Openapi Java camel-openapi-java-starter Rest-dsl support for using openapi doc OpenTelemetry camel-opentelemetry-starter Distributed tracing using OpenTelemetry Spring Security camel-spring-security Security using Spring Security YAML DSL camel-yaml-dsl-starter Camel DSL with YAML 1.4. Starter Configuration Clear and accessible configuration is a crucial part of any application. Camel starters fully support Spring Boot's external configuration mechanism. 
You can also configure them through Spring Beans for more complex use cases. 1.4.1. Using External Configuration Internally, every starter is configured through Spring Boot's ConfigurationProperties . Each configuration parameter can be set in various ways ( application.[properties|json|yaml] files, command line arguments, environment variables, etc.). Parameters have the form of camel.[component|language|dataformat].[name].[parameter] For example to configure the URL of the MQTT5 broker you can set: Or to configure the delimiter of the CSV dataformat to be a semicolon (;) you can set: Camel will use the Type Converter mechanism when setting properties to the desired type. You can refer to beans in the Registry using the #bean:name syntax: The Bean would typically be created in Java: @Bean("myjtaTransactionManager") public JmsTransactionManager myjtaTransactionManager(PooledConnectionFactory pool) { JmsTransactionManager manager = new JmsTransactionManager(pool); manager.setDefaultTimeout(45); return manager; } Beans can also be created in configuration files but this is not recommended for complex use cases. 1.4.2. Using Beans Starters can also be created and configured via Spring Beans . Before creating a starter , Camel will first look it up in the Registry by its name to see if it already exists. For example to configure a Kafka component: @Bean("kafka") public KafkaComponent kafka(KafkaConfiguration kafkaconfiguration){ return ComponentsBuilderFactory.kafka() .brokers("{{kafka.host}}:{{kafka.port}}") .build(); } The Bean name has to be equal to that of the Component, Dataformat or Language that you are configuring. If the Bean name isn't specified in the annotation it will be set to the method name. Typical Camel Spring Boot projects will use a combination of external configuration and Beans to configure an application. For more examples on how to configure your Camel Spring Boot project, please see the example repository . 1.5. Generating a Camel for Spring Boot application using Maven You can generate a Camel Spring Boot application using the Maven archetype org.apache.camel.archetypes:camel-archetype-spring-boot:3.20.1.redhat-00109 . Procedure Run the following command: mvn archetype:generate \ -DarchetypeGroupId=org.apache.camel.archetypes \ -DarchetypeArtifactId=camel-archetype-spring-boot \ -DarchetypeVersion=3.20.1.redhat-00109 \ -DgroupId=com.redhat \ -DartifactId=csb-app \ -Dversion=1.0-SNAPSHOT \ -DinteractiveMode=false Build the application: Run the application: Verify that the application is running by examining the console log for the Hello World output which is generated by the application. 1.6. Deploying a Camel Spring Boot application to OpenShift This guide demonstrates how to deploy a Camel Spring Boot application to OpenShift. Prerequisites You have access to the OpenShift cluster. The OpenShift oc CLI client is installed or you have access to the OpenShift Container Platform web console. Note The certified OpenShift Container platforms are listed in the Camel for Spring Boot Supported Configurations . The Red Hat OpenJDK 11 (ubi8/openjdk-11) container image is used in the following example. Procedure Generate a Camel for Spring Boot application using Maven by following the instructions in section 1.5 Generating a Camel for Spring Boot application using Maven of this guide. Under the directory in which the modified pom.xml exists, execute the following command. Verify that the CSB application is running on the pod. 1.7.
Applying patch to Camel Spring Boot Using the new patch-maven-plugin mechanism, you can apply a patch to your Red Hat Camel Spring Boot application. This mechanism allows you to change the individual versions provided by different Red Hat application BOMS, for example, camel-spring-boot-bom . The purpose of the patch-maven-plugin is to update the versions of the dependencies listed in the Camel on Spring Boot BOM to the versions specified in the patch metadata that you wish to apply to your applications. The patch-maven-plugin performs the following operations: Retrieve the patch metadata related to current Red Hat application BOMs. Apply the version changes to <dependencyManagement> imported from the BOMs. After the patch-maven-plugin fetches the metadata, it iterates through all managed and direct dependencies of the project where the plugin was declared and replaces the dependency versions (if they match) using CVE/patch metadata. After the versions are replaced, the Maven build continues and progresses through standard Maven project stages. Procedure The following procedure explains how to apply the patch to your application. Add patch-maven-plugin to your project's pom.xml file. The version of the patch-maven-plugin must be the same as the version of the Camel on Spring Boot BOM. <build> <plugins> <<plugin> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>patch-maven-plugin</artifactId> <version>USD{camel-spring-boot-version}</version> <extensions>true</extensions> </plugin> </plugins> </build> When you run any of the mvn clean deploy , mvn validate , or mvn dependency:tree commands, the plugin searches through the project modules to check if the modules use the Red Hat Camel Spring Boot BOM. Only the following is the supported BOM: com.redhat.camel.springboot.platform:camel-spring-boot-bom : for Camel Spring Boot BOM If the plugin does not find the above BOM, the plugin displays the following messages: If the correct BOM is used, the patch metadata is found, but without any patches. The patch-maven-plugin attempts to fetch this Maven metadata. For the projects with Camel Spring Boot BOM, the com.redhat.camel.springboot.platform:redhat-camel-spring-boot-patch-metadata/maven-metadata.xml is resolved. This XML data is the metadata for the artifact with the com.redhat.camel.springboot.platform:redhat-camel-spring-boot-patch-metadata:RELEASE coordinates. Example metadata generated by Maven <?xml version="1.0" encoding="UTF-8"?> <metadata> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>redhat-camel-spring-boot-patch-metadata</artifactId> <versioning> <release>3.20.1.redhat-00041</release> <versions> <version>3.20.1.redhat-00041</version> </versions> <lastUpdated>20230322103858</lastUpdated> </versioning> </metadata> The patch-maven-plugin parses the metadata to select the version which applies to the current project. This action is possible only for the Maven projects using Camel on Spring Boot BOM with the specific version. Only the metadata that matches the version range or later is applicable, and it fetches only the latest version of the metadata. The patch-maven-plugin collects a list of remote Maven repositories for downloading the patch metadata identified by groupId , artifactId , and version found in steps. These Maven repositories are listed in the project's <repositories> elements in the active profiles, and also the repositories from the settings.xml file. 
Whether the metadata comes from a remote repository, local repository, or ZIP file, it is analyzed by the patch-maven-plugin . The fetched metadata contains a list of CVEs, and for each CVE, we have a list of the affected Maven artifacts (specified by glob patterns and version ranges) together with a version that contains a fix for a given CVE. For example, <?xml version="1.0" encoding="UTF-8" ?> <<metadata xmlns="urn:redhat:patch-metadata:1"> <product-bom groupId="com.redhat.camel.springboot.platform" artifactId="camel-spring-boot-bom" versions="[3.20,3.21)" /> <cves> </cves> <fixes> <fix id="HF0-1" description="logback-classic (Example) - Version Bump"> <affects groupId="ch.qos.logback" artifactId="logback-classic" versions="[1.0,1.3.0)" fix="1.3.0" /> </fix> </fixes> </metadata> Finally a list of fixes specified in patch metadata is consulted when iterating over all managed dependencies in the current project. These dependencies (and managed dependencies) that match are changed to fixed versions. For example: Skipping the patch If you do not wish to apply a specific patch to your project, the patch-maven-plugin provides a skip option. Assuming that you have already added the patch-maven-plugin to the project's pom.xml file, and you do not wish to alter the versions, you can use one of the following method to skip the patch. Add the skip option to your project's pom.xml file as follows. Or use the -DskipPatch option when running the mvn command as follows. As shown in the above output, the patch-maven-plugin was not invoked, which resulted in the patch not being applied to the application. 1.8. Camel REST DSL OpenApi Maven Plugin The Camel REST DSL OpenApi Maven Plugin supports the following goals. camel-restdsl-openapi:generate - To generate consumer REST DSL RouteBuilder source code from OpenApi specification camel-restdsl-openapi:generate-with-dto - To generate consumer REST DSL RouteBuilder source code from OpenApi specification and with DTO model classes generated via the swagger-codegen-maven-plugin. camel-restdsl-openapi:generate-xml - To generate consumer REST DSL XML source code from OpenApi specification camel-restdsl-openapi:generate-xml-with-dto - To generate consumer REST DSL XML source code from OpenApi specification and with DTO model classes generated via the swagger-codegen-maven-plugin. camel-restdsl-openapi:generate-yaml - To generate consumer REST DSL YAML source code from OpenApi specification camel-restdsl-openapi:generate-yaml-with-dto - To generate consumer REST DSL YAML source code from OpenApi specification and with DTO model classes generated via the swagger-codegen-maven-plugin. 1.8.1. Adding plugin to Maven pom.xml This plugin can be added to your Maven pom.xml file by adding it to the plugins section, for example in a Spring Boot application: The plugin can then be executed using its prefix camel-restdsl-openapi as shown below. 1.8.2. camel-restdsl-openapi:generate The goal of the Camel REST DSL OpenApi Maven Plugin is used to generate REST DSL RouteBuilder implementation source code from Maven. 1.8.3. Options The plugin supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the configuration tag. Parameter Default Value Description skip false Set to true to skip code generation filterOperation Used for including only the operation ids specified. Multiple ids can be separated by comma. Wildcards can be used, eg find* to include all operations starting with find . 
specificationUri src/spec/openapi.json URI of the OpenApi specification, supports filesystem paths, HTTP and classpath resources, by default src/spec/openapi.json within the project directory. Supports JSON and YAML. auth Adds authorization headers when fetching the OpenApi specification definitions remotely. Pass in a URL-encoded string of name:header with a comma separating multiple values. className from title or RestDslRoute Name of the generated class, taken from the OpenApi specification title or set to RestDslRoute by default packageName from host or rest.dsl.generated Name of the package for the generated class, taken from the OpenApi specification host value or rest.dsl.generated by default indent " " Which indenting character(s) to use, by default four spaces, you can use \t to signify tab character outputDirectory generated-sources/restdsl-openapi Where to place the generated source file, by default generated-sources/restdsl-openapi within the project directory destinationGenerator Fully qualified class name of the class that implements org.apache.camel.generator.openapi.DestinationGenerator interface for customizing destination endpoint destinationToSyntax direct:USD{operationId} The default to syntax for the to uri, which is to use the direct component. restConfiguration true Whether to include generation of the rest configuration with detected rest component to be used. apiContextPath Define openapi endpoint path if restConfiguration is set to true. clientRequestValidation false Whether to enable request validation. basePath Overrides the api base path as defined in the OpenAPI specification. requestMappingValues /** Allows generation of custom RequestMapping mapping values. Multiple mapping values can be passed as: <requestMappingValues> <param>/my-api-path/ </param> <param>/my-other-path/ </param> </requestMappingValues> 1.8.4. Spring Boot Project with Servlet component If the Maven project is a Spring Boot project and restConfiguration is enabled and the servlet component is being used as REST component, then this plugin will autodetect the package name (if packageName has not been explicitly configured) where the @SpringBootApplication main class is located, and use the same package name for generating Rest DSL source code and a needed CamelRestController support class. 1.8.5. camel-restdsl-openapi:generate-with-dto Works as generate goal but also generates DTO model classes by automatic executing the swagger-codegen-maven-plugin to generate java source code of the DTO model classes from the OpenApi specification. This plugin has been scoped and limited to only support a good effort set of defaults for using the swagger-codegen-maven-plugin to generate the model DTOs. If you need more power and flexibility then use the Swagger Codegen Maven Plugin directly to generate the DTO and not this plugin. The DTO classes may require additional dependencies such as: 1.8.6. Options The plugin supports the following additional options Parameter Default Value Description swaggerCodegenMavenPluginVersion 3.0.36 The version of the io.swagger.codegen.v3:swagger-codegen-maven-plugin maven plugin to be used. 
modelOutput Target output path (default is USD{project.build.directory}/generated-sources/openapi) modelPackage io.swagger.client.model The package to use for generated model objects/classes modelNamePrefix Sets the pre- or suffix for model classes and enums modelNameSuffix Sets the pre- or suffix for model classes and enums modelWithXml false Enable XML annotations inside the generated models (only works with libraries that provide support for JSON and XML) configOptions Pass a map of language-specific parameters to swagger-codegen-maven-plugin 1.8.7. camel-restdsl-openapi:generate-xml The camel-restdsl-openapi:generate-xml goal of the Camel REST DSL OpenApi Maven Plugin is used to generate REST DSL XML implementation source code from Maven. 1.8.8. Options The plugin supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the <configuration> tag. Parameter Default Value Description skip false Set to true to skip code generation. filterOperation Used for including only the operation ids specified. Multiple ids can be separated by comma. Wildcards can be used, eg find* to include all operations starting with find . specificationUri src/spec/openapi.json URI of the OpenApi specification, supports filesystem paths, HTTP and classpath resources, by default src/spec/openapi.json within the project directory. Supports JSON and YAML. auth Adds authorization headers when fetching the OpenApi specification definitions remotely. Pass in a URL-encoded string of name:header with a comma separating multiple values. outputDirectory generated-sources/restdsl-openapi Where to place the generated source file, by default generated-sources/restdsl-openapi within the project directory fileName camel-rest.xml The name of the XML file as output. blueprint false If enabled generates OSGi Blueprint XML instead of Spring XML. destinationGenerator Fully qualified class name of the class that implements org.apache.camel.generator.openapi.DestinationGenerator interface for customizing destination endpoint destinationToSyntax direct:USD{operationId} The default to syntax for the to uri, which is to use the direct component. restConfiguration true Whether to include generation of the rest configuration with detected rest component to be used. apiContextPath Define openapi endpoint path if restConfiguration is set to true . clientRequestValidation false Whether to enable request validation. basePath Overrides the api base path as defined in the OpenAPI specification. requestMappingValues /** 1.8.9. camel-restdsl-openapi:generate-xml-with-dto Works as generate-xml goal but also generates DTO model classes by automatic executing the swagger-codegen-maven-plugin to generate java source code of the DTO model classes from the OpenApi specification. This plugin has been scoped and limited to only support a good effort set of defaults for using the swagger-codegen-maven-plugin to generate the model DTOs. If you need more power and flexibility then use the Swagger Codegen Maven Plugin directly to generate the DTO and not this plugin. The DTO classes may require additional dependencies such as: 1.8.10. Options The plugin supports the following additional options Parameter Default Value Description swaggerCodegenMavenPluginVersion 3.0.36 The version of the io.swagger.codegen.v3:swagger-codegen-maven-plugin maven plugin to be used. 
modelOutput Target output path (default is USD{project.build.directory}/generated-sources/openapi) modelPackage io.swagger.client.model The package to use for generated model objects/classes modelNamePrefix Sets the pre- or suffix for model classes and enums modelNameSuffix Sets the pre- or suffix for model classes and enums modelWithXml false Enable XML annotations inside the generated models (only works with libraries that provide support for JSON and XML) configOptions Pass a map of language-specific parameters to swagger-codegen-maven-plugin 1.8.11. camel-restdsl-openapi:generate-yaml The camel-restdsl-openapi:generate-yaml goal of the Camel REST DSL OpenApi Maven Plugin is used to generate REST DSL YAML implementation source code from Maven. 1.8.12. Options The plugin supports the following options which can be configured from the command line (use -D syntax), or defined in the pom.xml file in the <configuration> tag. Parameter Default Value Description skip false Set to true to skip code generation. filterOperation Used for including only the operation ids specified. Multiple ids can be separated by comma. Wildcards can be used, eg find* to include all operations starting with find . specificationUri src/spec/openapi.json URI of the OpenApi specification, supports filesystem paths, HTTP and classpath resources, by default src/spec/openapi.json within the project directory. Supports JSON and YAML. auth Adds authorization headers when fetching the OpenApi specification definitions remotely. Pass in a URL-encoded string of name:header with a comma separating multiple values. outputDirectory generated-sources/restdsl-openapi Where to place the generated source file, by default generated-sources/restdsl-openapi within the project directory fileName camel-rest.xml The name of the XML file as output. destinationGenerator Fully qualified class name of the class that implements org.apache.camel.generator.openapi.DestinationGenerator interface for customizing destination endpoint destinationToSyntax direct:USD{operationId} The default to syntax for the to uri, which is to use the direct component. restConfiguration true Whether to include generation of the rest configuration with detected rest component to be used. apiContextPath Define openapi endpoint path if restConfiguration is set to true . clientRequestValidation false Whether to enable request validation. basePath Overrides the api base path as defined in the OpenAPI specification. requestMappingValues /** 1.8.13. camel-restdsl-openapi:generate-yaml-with-dto Works as generate-yaml goal but also generates DTO model classes by automatic executing the swagger-codegen-maven-plugin to generate java source code of the DTO model classes from the OpenApi specification. This plugin has been scoped and limited to only support a good effort set of defaults for using the swagger-codegen-maven-plugin to generate the model DTOs. If you need more power and flexibility then use the Swagger Codegen Maven Plugin directly to generate the DTO and not this plugin. The DTO classes may require additional dependencies such as: 1.8.14. Options The plugin supports the following additional options Parameter Default Value Description swaggerCodegenMavenPluginVersion 3.0.36 The version of the io.swagger.codegen.v3:swagger-codegen-maven-plugin maven plugin to be used. 
modelOutput Target output path (default is USD{project.build.directory}/generated-sources/openapi) modelPackage io.swagger.client.model The package to use for generated model objects/classes modelNamePrefix Sets the pre- or suffix for model classes and enums modelNameSuffix Sets the pre- or suffix for model classes and enums modelWithXml false Enable XML annotations inside the generated models (only works with libraries that provide support for JSON and XML) configOptions Pass a map of language-specific parameters to swagger-codegen-maven-plugin 1.9. Support for FIPS Compliance You can install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture. For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change applies when the machines deploy based on the status of an option in the install-config.yaml file, which governs the cluster options that users can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when installing the operating system on the machines you plan to use as worker machines. These configuration methods ensure that your cluster meets the requirements of a FIPS compliance audit. Only FIPS Validated / Modules in Process cryptography packages are enabled before the initial system boot. Because you must enable FIPS before your cluster's operating system boots for the first time, you cannot enable FIPS after you deploy a cluster. 1.9.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS Validated / Modules in Process modules within RHEL and RHCOS for its operating system components. For example, when users SSH into OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's Golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. For more details about FIPS, see FIPS mode attributes and limitations For details on deploying Camel Spring Boot on OpenShift, see How to deploy a Camel Spring Boot application to OpenShift? Details about supported configurations can be found at, Camel for Spring Boot Supported Configurations
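To pull the option tables in this section together, the following is a rough sketch of what a pom.xml configuration for the camel-restdsl-openapi-plugin might look like when using the generate-with-dto goal. The goal and the parameter names (specificationUri, packageName, restConfiguration, modelPackage) are taken from the tables above; the plugin version and the example package names and paths are placeholders, not values mandated by this guide.

<plugin>
  <groupId>org.apache.camel</groupId>
  <artifactId>camel-restdsl-openapi-plugin</artifactId>
  <!-- placeholder version; align it with the Camel version your project already uses -->
  <version>3.20.1</version>
  <configuration>
    <!-- OpenApi document the Rest DSL routes are generated from (default shown) -->
    <specificationUri>src/spec/openapi.json</specificationUri>
    <!-- package for the generated route class instead of the host-derived default -->
    <packageName>com.example.generated</packageName>
    <!-- also emit a rest configuration for the detected REST component -->
    <restConfiguration>true</restConfiguration>
    <!-- forwarded to swagger-codegen-maven-plugin, only used by the *-with-dto goals -->
    <modelPackage>com.example.generated.model</modelPackage>
  </configuration>
</plugin>

With a configuration like this, running mvn camel-restdsl-openapi:generate-with-dto produces the Rest DSL source under generated-sources/restdsl-openapi and the DTO model classes under the modelOutput location, as described in the tables above.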
[ "<dependencyManagement> <dependencies> <!-- Camel BOM --> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>3.20.1.redhat-00109</version> <type>pom</type> <scope>import</scope> </dependency> <!-- ... other BOMs or dependencies ... --> </dependencies> </dependencyManagement>", "<dependencies> <!-- Camel Starter --> <dependency> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>camel-spring-boot-starter</artifactId> </dependency> <!-- ... other dependencies ... --> </dependencies>", "<dependencies> <!-- ... other dependencies ... --> <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-paho-mqtt5</artifactId> </dependency> </dependencies>", "camel.component.paho-mqtt5.broker-url=tcp://localhost:61616", "import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"...\") .to(\"...\"); } }", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-bom</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency>", "package com.example; import org.apache.camel.builder.RouteBuilder; import org.springframework.stereotype.Component; @Component public class MyRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo\").to(\"log:bar\"); } }", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-boot-starter</artifactId> <version>3.20.1.redhat-00109</version> <!-- use the same version as your Camel core version --> </dependency>", "@Configuration public class MyAppConfig { @Autowired CamelContext camelContext; @Bean MyService myService() { return new DefaultMyService(camelContext); } }", "@Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from(\"jms:invoices\").to(\"file:/invoices\"); } }", "@Configuration public class MyRouterConfiguration { @Bean RoutesBuilder myRouter() { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"jms:invoices\").to(\"file:/invoices\"); } }; } }", "route.from = jms:invoices", "java -Droute.to=jms:processed.invoices -jar mySpringApp.jar", "@Component public class MyRouter extends RouteBuilder { @Override public void configure() throws Exception { from(\"{{route.from}}\").to(\"{{route.to}}\"); } }", "@Configuration public class MyAppConfig { @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override void beforeApplicationStart(CamelContext context) { // your custom configuration goes here } }; } }", "@Component public class InvoiceProcessor { @Autowired private ProducerTemplate producerTemplate; @Autowired private ConsumerTemplate consumerTemplate; public void processNextInvoice() { Invoice invoice = consumerTemplate.receiveBody(\"jms:invoices\", Invoice.class); producerTemplate.sendBody(\"netty-http:http://invoicing.com/received/\" + invoice.id()); } }", "camel.springboot.consumer-template-cache-size = 100 camel.springboot.producer-template-cache-size = 200", "@Component public class 
InvoiceProcessor { @Autowired private TypeConverter typeConverter; public long parseInvoiceValue(Invoice invoice) { String invoiceValue = invoice.grossValue(); return typeConverter.convertTo(Long.class, invoiceValue); } }", "@Component public class InvoiceProcessor { @Autowired private TypeConverter typeConverter; public UUID parseInvoiceId(Invoice invoice) { // Using Spring's StringToUUIDConverter UUID id = invoice.typeConverter.convertTo(UUID.class, invoice.getId()); } }", "camel.springboot.main-run-controller = true", "turn off camel.springboot.routes-include-pattern = false", "scan only in the com/foo/routes classpath camel.springboot.routes-include-pattern = classpath:com/foo/routes/*.xml", "<routes xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"test\"> <from uri=\"timer://trigger\"/> <transform> <simple>ref:myBean</simple> </transform> <to uri=\"log:out\"/> </route> </routes>", "<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <version>2.7.18</version> <!-- Use the same version as your Spring Boot version --> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-spring-junit5</artifactId> <version>3.20.1.redhat-00056</version> <!-- use the same version as your Camel core version --> <scope>test</scope> </dependency>", "@CamelSpringBootTest @SpringBootApplication @MockEndpoints(\"direct:end\") public class MyApplicationTest { @Autowired private ProducerTemplate template; @EndpointInject(\"mock:direct:end\") private MockEndpoint mock; @Test public void testReceive() throws Exception { mock.expectedBodiesReceived(\"Hello\"); template.sendBody(\"direct:start\", \"Hello\"); mock.assertIsSatisfied(); } }", "camel.component.paho-mqtt5.broker-url=tcp://localhost:61616", "camel.dataformat.csv.delimiter=;", "camel.component.jms.transactionManager=#bean:myjtaTransactionManager", "@Bean(\"myjtaTransactionManager\") public JmsTransactionManager myjtaTransactionManager(PooledConnectionFactory pool) { JmsTransactionManager manager = new JmsTransactionManager(pool); manager.setDefaultTimeout(45); return manager; }", "@Bean(\"kafka\") public KafkaComponent kafka(KafkaConfiguration kafkaconfiguration){ return ComponentsBuilderFactory.kafka() .brokers(\"{{kafka.host}}:{{kafka.port}}\") .build(); }", "mvn archetype:generate -DarchetypeGroupId=org.apache.camel.archetypes -DarchetypeArtifactId=camel-archetype-spring-boot -DarchetypeVersion=3.20.1.redhat-00109 -DgroupId=com.redhat -DartifactId=csb-app -Dversion=1.0-SNAPSHOT -DinteractiveMode=false", "mvn package -f csb-app/pom.xml", "java -jar csb-app/target/csb-app-1.0-SNAPSHOT.jar", "com.redhat.MySpringBootApplication : Started MySpringBootApplication in 3.514 seconds (JVM running for 4.006) Hello World Hello World", "mvn clean -DskipTests oc:deploy -Popenshift", "logs -f dc/csb-app", "<build> <plugins> <<plugin> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>patch-maven-plugin</artifactId> <version>USD{camel-spring-boot-version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] No project in the reactor uses Camel on Spring Boot product BOM. Skipping patch processing. 
[INFO] [PATCH] Done in 7ms =================================================", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 2 project repositories [INFO] [PATCH] - redhat-ga-repository: http://maven.repository.redhat.com/ga/ [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2 Downloading from redhat-ga-repository: http://maven.repository.redhat.com/ga/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/maven-metadata.xml Downloading from central: https://repo.maven.apache.org/maven2/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/maven-metadata.xml [INFO] [PATCH] Resolved patch descriptor: /path/to/.m2/repository/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/3.20.1.redhat-00043/redhat-camel-spring-boot-patch-metadata-3.20.1.redhat-00043.xml [INFO] [PATCH] Patch metadata found for com.redhat.camel.springboot.platform/camel-spring-boot-bom/[3.20,3.21) [INFO] [PATCH] Done in 938ms =================================================", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <metadata> <groupId>com.redhat.camel.springboot.platform</groupId> <artifactId>redhat-camel-spring-boot-patch-metadata</artifactId> <versioning> <release>3.20.1.redhat-00041</release> <versions> <version>3.20.1.redhat-00041</version> </versions> <lastUpdated>20230322103858</lastUpdated> </versioning> </metadata>", "mvn clean install [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 2 project repositories [INFO] [PATCH] - MRRC-GA: https://maven.repository.redhat.com/ga [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2", "<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <<metadata xmlns=\"urn:redhat:patch-metadata:1\"> <product-bom groupId=\"com.redhat.camel.springboot.platform\" artifactId=\"camel-spring-boot-bom\" versions=\"[3.20,3.21)\" /> <cves> </cves> <fixes> <fix id=\"HF0-1\" description=\"logback-classic (Example) - Version Bump\"> <affects groupId=\"ch.qos.logback\" artifactId=\"logback-classic\" versions=\"[1.0,1.3.0)\" fix=\"1.3.0\" /> </fix> </fixes> </metadata>", "mvn dependency:tree [INFO] Scanning for projects [INFO] ========== Red Hat Maven patching ========== [INFO] [PATCH] Reading patch metadata and artifacts from 3 project repositories [INFO] [PATCH] - redhat-ga-repository: http://maven.repository.redhat.com/ga/ [INFO] [PATCH] - local: file:///path/to/.m2/repository [INFO] [PATCH] - central: https://repo.maven.apache.org/maven2 [INFO] [PATCH] Resolved patch descriptor:/path/to/.m2/repository/com/redhat/camel/springboot/platform/redhat-camel-spring-boot-patch-metadata/3.20.1.redhat-00043/redhat-camel-spring-boot-patch-metadata-3.20.1.redhat-00043.xml [INFO] [PATCH] Patch metadata found for com.redhat.camel.springboot.platform/camel-spring-boot-bom/[3.20,3.21) [INFO] [PATCH] - patch contains 1 patch fix [INFO] [PATCH] Processing managed dependencies to apply patch fixes [INFO] [PATCH] - HF0-1: logback-classic (Example) - Version Bump [INFO] [PATCH] Applying change ch.qos.logback/logback-classic/[1.0,1.3.0) -> 1.3.0 [INFO] [PATCH] Project com.test:yaml-routes [INFO] [PATCH] - managed dependency: ch.qos.logback/logback-classic/1.2.11 -> 1.3.0 [INFO] [PATCH] Done in 39ms =================================================", "<build> <plugins> <plugin> <groupId>com.redhat.camel.springboot.platform</groupId> 
<artifactId>patch-maven-plugin</artifactId> <version>USD{camel-spring-boot-version}</version> <extensions>true</extensions> <configuration> <skip>true</skip> </configuration> </plugin> </plugins> </build>", "mvn clean install -DskipPatch [INFO] Scanning for projects [INFO] [INFO] -------------------------< com.example:test-csb >------------------------- [INFO] Building A Camel Spring Boot Route 1.0-SNAPSHOT", "<build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-restdsl-openapi-plugin</artifactId> <version>{CamelCommunityVersion}</version> </plugin> </plugins> </build>", "USDmvn camel-restdsl-openapi:generate", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>", "<dependency> <groupId>com.google.code.gson</groupId> <artifactId>gson</artifactId> <version>2.10.1</version> </dependency> <dependency> <groupId>io.swagger.core.v3</groupId> <artifactId>swagger-core</artifactId> <version>2.2.8</version> </dependency> <dependency> <groupId>org.threeten</groupId> <artifactId>threetenbp</artifactId> <version>1.6.8</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/getting_started_with_camel_spring_boot/getting-started-with-camel-spring-boot_csb
14.21. Configuring Memory Tuning
14.21. Configuring Memory Tuning The virsh memtune virtual_machine --parameter size command is covered in the Virtualization Tuning and Optimization Guide.
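For quick reference, the command takes a domain name and one or more memory-limit parameters. The following sketch uses a placeholder guest name and an example hard limit; values without a unit suffix are interpreted in kibibytes.

# display the current memory parameters for a guest
virsh memtune rhel6-guest

# set a hard memory limit (value in KiB unless a unit suffix is given)
virsh memtune rhel6-guest --hard-limit 1048576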
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Configuring_memory_Tuning
2.3. The /proc Virtual File System
2.3. The /proc Virtual File System Unlike most file systems, /proc contains neither text nor binary files. Instead, it houses virtual files ; as such, /proc is normally referred to as a virtual file system. These virtual files are typically zero bytes in size, even if they contain a large amount of information. The /proc file system is not used for storage. Its main purpose is to provide a file-based interface to hardware, memory, running processes, and other system components. Real-time information can be retrieved on many system components by viewing the corresponding /proc file. Some of the files within /proc can also be manipulated (by both users and applications) to configure the kernel. The following /proc files are relevant in managing and monitoring system storage: /proc/devices Displays various character and block devices that are currently configured. /proc/filesystems Lists all file system types currently supported by the kernel. /proc/mdstat Contains current information on multiple-disk or RAID configurations on the system, if they exist. /proc/mounts Lists all mounts currently used by the system. /proc/partitions Contains partition block allocation information. For more information about the /proc file system, refer to the Red Hat Enterprise Linux 6 Deployment Guide .
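As a quick illustration of the file-based interface described above, each of the storage-related /proc files listed here can be read with standard tools; the exact output varies by system.

cat /proc/filesystems   # file system types the kernel currently supports
cat /proc/partitions    # partition block allocation information
cat /proc/mounts        # all mounts currently used by the system
cat /proc/mdstat        # status of multiple-disk or RAID configurations, if any exist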
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/proc-virt-fs
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology view shows a mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage → Data Foundation → Topology. The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or alert indications. Choose a node to view its details in the right-hand panel. You can also access resources or deployments within a node by clicking the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of problems and offers granularity that aids in troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_power/viewing-odf-topology_mcg-verify
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_openstack_identity_with_external_user_management_services/proc_providing-feedback-on-red-hat-documentation
Chapter 60. Servers and Services
Chapter 60. Servers and Services ReaR creates two ISO images instead of one In ReaR, the OUTPUT_URL directive enables specifying the location for the ISO image containing the rescue system. Currently, with this directive set, ReaR creates two copies of the ISO image: one in the specified directory and one in the /var/lib/rear/output/ default directory. This requires additional space for the image. This is especially important if a full-system backup is included in the ISO image (using the BACKUP=NETFS and BACKUP_URL=iso:///backup/ configuration). To work around this behavior, delete the extra ISO image once ReaR has finished working or, to avoid having a period of time with double storage consumption, create the image in the default directory and then move it to the desired location manually. There is a request for enhancement to change this behavior and make ReaR create only one copy of the ISO image. (BZ#1320552) The default value of first_valid_uid in dovecot has changed In Red Hat Enterprise Linux 7, the default configuration of first_valid_uid in dovecot was changed to 1000 to match the system-wide configuration specified as UID_MIN in the /etc/login.defs file. If a system has UID_MIN manually changed to 500 and relies on the dovecot default value, dovecot will not serve users with IDs lower than first_valid_uid. As a consequence, if you have regular users with IDs less than 1000, you have to update first_valid_uid. After you do this, dovecot will work as expected. (BZ#1280433)
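For the dovecot issue described above, updating first_valid_uid is a one-line configuration change. The sketch below assumes the setting lives in /etc/dovecot/conf.d/10-mail.conf, which is the usual location on Red Hat Enterprise Linux 7; adjust the path if your deployment keeps mail settings elsewhere.

# /etc/dovecot/conf.d/10-mail.conf
# Allow dovecot to serve regular users whose UIDs start at 500,
# matching a system where UID_MIN in /etc/login.defs was lowered to 500.
first_valid_uid = 500

Restart the dovecot service after changing the value so that it takes effect.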
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/known_issues_servers_and_services
4.4. Extending the rh-perl524 Software Collection
4.4. Extending the rh-perl524 Software Collection This section describes extending the rh-perl524 Software Collection by building your own dependent Software Collection. Important Examples described in this section only work as expected when extending the rh-perl524 Software Collection with packages that: do not provide any Perl modules, and only depend on Perl modules provided by the rh-perl524 Software Collection. 4.4.1. The h2m144 Software Collection This section contains a commented example of a dependent Software Collection's metapackage. The dependent Software Collection is named h2m144 and contains the help2man Perl package version 1.44.1. The h2m144 Software Collection depends on the rh-perl524 Software Collection. Note the following in the h2m144 Software Collection metapackage: The h2m144 Software Collection metapackage has the following build dependency set: BuildRequires: %{scl_prefix_perl}scldevel This expands to rh-perl524-scldevel . The rh-perl524-scldevel subpackage contains two important macros, %scl_perl and %scl_prefix_perl , and also provides Perl dependency generators. Note that the macros are defined at the top of the metapackage spec file. Although the definitions are not required, they provide a visual hint that the h2m144 Software Collection has been designed to be built on top of the rh-perl524 Software Collection. They also serve as a fallback value. The h2m144-build subpackage has the following dependency set: Requires: %{scl_prefix_perl}scldevel This expands to rh-perl524-scldevel . The purpose of this dependency is to ensure that the macros and dependency generators are always present when building packages for the h2m144 Software Collection. The enable scriptlet for the h2m144 Software Collection contains the following line: . scl_source enable %{scl_perl} Note the dot at the beginning of the line. This line makes the Perl Software Collection start implicitly when the h2m144 Software Collection is started so that the user can only type scl enable h2m144 command instead of scl enable rh-perl524 h2m144 command to run command in the Software Collection environment. The macro file macros.h2m144-config calls the Perl dependency generators, and certain Perl-specific macros used in other packages' spec files. %global scl h2m144 %scl_package %scl # Default values for the rh-perl524 Software Collection. These # will be used when rh-perl524-scldevel is not in the build root. %{!?scl_perl:%global scl_perl rh-perl524} %{!?scl_prefix_perl:%global scl_prefix_perl %{scl_perl}-} # Only for this build, override __perl_requires for the automatic dependency # generator. %global __perl_requires /usr/lib/rpm/perl.req.stack Summary: Package that installs %scl Name: %scl_name Version: 1 Release: 1%{?dist} License: GPLv2+ BuildRequires: scl-utils-build # Always make sure that there is the rh-perl524-scldevel # package in the build root. BuildRequires: %{scl_prefix_perl}scldevel # Require rh-perl524-perl-macros; you will need macros from that package. BuildRequires: %{scl_prefix_perl}perl-macros Requires: %{scl_prefix}help2man %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils Requires: %{scl_prefix_perl}runtime %description runtime Package shipping essential scripts to work with %scl Software Collection. 
%package build Summary: Package shipping basic build configuration Requires: scl-utils-build # Require rh-perl524-scldevel so that there is always access to the %%scl_perl # and %%scl_prefix_perl macros in builds for this Software Collection. Requires: %{scl_prefix_perl}scldevel %description build Package shipping essential configuration macros to build %scl Software Collection. %prep %setup -c -T %build %install %scl_install # Create the enable scriptlet that: # - Adds an additional load path for the Perl interpreter. # - Runs scl_source so that you can run: # scl enable h2m144 'bash' # instead of: # scl enable rh-perl524 h2m144 'bash' cat >> %{buildroot}%{_scl_scripts}/enable << EOF . scl_source enable %{scl_perl} export PATH="%{_bindir}:%{_sbindir}\USD{PATH:+:\USD{PATH}}" export MANPATH="%{_mandir}:\USD{MANPATH:-}" EOF cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl}-config << EOF %%scl_package_override() %%{expand:%%global __perl_requires /usr/lib/rpm/perl.req.stack %%global __perl_provides /usr/lib/rpm/perl.prov.stack %%global __perl %{_scl_prefix}/%{scl_perl}/root/usr/bin/perl } EOF %files %files runtime -f filelist %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %changelog * Tue Apr 22 2014 John Doe <[email protected]> - 1-1 - Initial package. 4.4.2. The help2man Package Below is a commented example of the help2man package spec file. Note the following in the spec file: The BuildRequires tags are prefixed with %{?scl_prefix_perl} instead of %{scl_prefix} . %{?scl:%scl_package help2man} %{!?scl:%global pkg_name %{name}} # Supported build option: # # --with nls ... build this package with --enable-nls %bcond_with nls Name: %{?scl_prefix}help2man Summary: Create simple man pages from --help output Version: 1.44.1 Release: 1%{?dist} Group: Development/Tools License: GPLv3+ URL: http://www.gnu.org/software/help2man Source: ftp://ftp.gnu.org/gnu/help2man/help2man-%{version}.tar.xz %{!?with_nls:BuildArch: noarch} BuildRequires: %{?scl_prefix_perl}perl(Getopt::Long) BuildRequires: %{?scl_prefix_perl}perl(POSIX) BuildRequires: %{?scl_prefix_perl}perl(Text::ParseWords) BuildRequires: %{?scl_prefix_perl}perl(Text::Tabs) BuildRequires: %{?scl_prefix_perl}perl(strict) %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(Locale::gettext) /usr/bin/msgfmt} %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(Encode)} %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(I18N::Langinfo)} Requires: %{?scl_prefix_perl}perl(:MODULE_COMPAT_%(%{?scl:scl enable %{scl_perl} '}eval "`perl -V:version`"; echo USDversion%{?scl:'})) Requires(post): /sbin/install-info Requires(preun): /sbin/install-info %description help2man is a script to create simple man pages from the --help and --version output of programs. Since most GNU documentation is now in info format, this provides a way to generate a placeholder man page pointing to that resource while still providing some useful information. 
%prep %setup -q -n help2man-%{version} %build %configure --%{!?with_nls:disable}%{?with_nls:enable}-nls --libdir=%{_libdir}/help2man %{?scl:scl enable %{scl} "} make %{?_smp_mflags} %{?scl:"} %install %{?scl:scl enable %{scl} "} make install_l10n DESTDIR=USDRPM_BUILD_ROOT %{?scl:"} %{?scl:scl enable %{scl} "} make install DESTDIR=USDRPM_BUILD_ROOT %{?scl:"} %find_lang %pkg_name --with-man %post /sbin/install-info %{_infodir}/help2man.info %{_infodir}/dir 2>/dev/null || : %preun if [ USD1 -eq 0 ]; then /sbin/install-info --delete %{_infodir}/help2man.info \ %{_infodir}/dir 2>/dev/null || : fi %files -f %pkg_name.lang %doc README NEWS THANKS COPYING %{_bindir}/help2man %{_infodir}/* %{_mandir}/man1/* %if %{with nls} %{_libdir}/help2man %endif %changelog * Tue Apr 22 2014 John Doe <[email protected]> - 1.44.1-1 - Built for h2m144 SCL. 4.4.3. Building the h2m144 Software Collection To build the h2m144 Software Collection: Install the rh-perl524-scldevel and rh-perl524-perl-macros packages that are part of the perl524 Software Collection. Build h2m144.spec and install the h2m144-runtime and h2m144-build packages. Install the rh-perl524-perl , rh-perl524-perl-Text-ParseWords and rh-perl524-perl-Getopt-Long packages, which are all build requirements for help2man . Build help2man.spec . 4.4.4. Testing the h2m144 Software Collection To test the h2m144 Software Collection: Install the h2m144-help2man package. Run the following command: Verify that the output is similar to the following lines:
[ "BuildRequires: %{scl_prefix_perl}scldevel", "Requires: %{scl_prefix_perl}scldevel", ". scl_source enable %{scl_perl}", "%global scl h2m144 %scl_package %scl Default values for the rh-perl524 Software Collection. These will be used when rh-perl524-scldevel is not in the build root. %{!?scl_perl:%global scl_perl rh-perl524} %{!?scl_prefix_perl:%global scl_prefix_perl %{scl_perl}-} Only for this build, override __perl_requires for the automatic dependency generator. %global __perl_requires /usr/lib/rpm/perl.req.stack Summary: Package that installs %scl Name: %scl_name Version: 1 Release: 1%{?dist} License: GPLv2+ BuildRequires: scl-utils-build Always make sure that there is the rh-perl524-scldevel package in the build root. BuildRequires: %{scl_prefix_perl}scldevel Require rh-perl524-perl-macros; you will need macros from that package. BuildRequires: %{scl_prefix_perl}perl-macros Requires: %{scl_prefix}help2man %description This is the main package for %scl Software Collection. %package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils Requires: %{scl_prefix_perl}runtime %description runtime Package shipping essential scripts to work with %scl Software Collection. %package build Summary: Package shipping basic build configuration Requires: scl-utils-build Require rh-perl524-scldevel so that there is always access to the %%scl_perl and %%scl_prefix_perl macros in builds for this Software Collection. Requires: %{scl_prefix_perl}scldevel %description build Package shipping essential configuration macros to build %scl Software Collection. %prep %setup -c -T %build %install %scl_install Create the enable scriptlet that: - Adds an additional load path for the Perl interpreter. - Runs scl_source so that you can run: scl enable h2m144 'bash' instead of: scl enable rh-perl524 h2m144 'bash' cat >> %{buildroot}%{_scl_scripts}/enable << EOF . scl_source enable %{scl_perl} export PATH=\"%{_bindir}:%{_sbindir}\\USD{PATH:+:\\USD{PATH}}\" export MANPATH=\"%{_mandir}:\\USD{MANPATH:-}\" EOF cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl}-config << EOF %%scl_package_override() %%{expand:%%global __perl_requires /usr/lib/rpm/perl.req.stack %%global __perl_provides /usr/lib/rpm/perl.prov.stack %%global __perl %{_scl_prefix}/%{scl_perl}/root/usr/bin/perl } EOF %files %files runtime -f filelist %scl_files %files build %{_root_sysconfdir}/rpm/macros.%{scl}-config %changelog * Tue Apr 22 2014 John Doe <[email protected]> - 1-1 - Initial package.", "%{?scl:%scl_package help2man} %{!?scl:%global pkg_name %{name}} Supported build option: # --with nls ... 
build this package with --enable-nls %bcond_with nls Name: %{?scl_prefix}help2man Summary: Create simple man pages from --help output Version: 1.44.1 Release: 1%{?dist} Group: Development/Tools License: GPLv3+ URL: http://www.gnu.org/software/help2man Source: ftp://ftp.gnu.org/gnu/help2man/help2man-%{version}.tar.xz %{!?with_nls:BuildArch: noarch} BuildRequires: %{?scl_prefix_perl}perl(Getopt::Long) BuildRequires: %{?scl_prefix_perl}perl(POSIX) BuildRequires: %{?scl_prefix_perl}perl(Text::ParseWords) BuildRequires: %{?scl_prefix_perl}perl(Text::Tabs) BuildRequires: %{?scl_prefix_perl}perl(strict) %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(Locale::gettext) /usr/bin/msgfmt} %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(Encode)} %{?with_nls:BuildRequires: %{?scl_prefix_perl}perl(I18N::Langinfo)} Requires: %{?scl_prefix_perl}perl(:MODULE_COMPAT_%(%{?scl:scl enable %{scl_perl} '}eval \"`perl -V:version`\"; echo USDversion%{?scl:'})) Requires(post): /sbin/install-info Requires(preun): /sbin/install-info %description help2man is a script to create simple man pages from the --help and --version output of programs. Since most GNU documentation is now in info format, this provides a way to generate a placeholder man page pointing to that resource while still providing some useful information. %prep %setup -q -n help2man-%{version} %build %configure --%{!?with_nls:disable}%{?with_nls:enable}-nls --libdir=%{_libdir}/help2man %{?scl:scl enable %{scl} \"} make %{?_smp_mflags} %{?scl:\"} %install %{?scl:scl enable %{scl} \"} make install_l10n DESTDIR=USDRPM_BUILD_ROOT %{?scl:\"} %{?scl:scl enable %{scl} \"} make install DESTDIR=USDRPM_BUILD_ROOT %{?scl:\"} %find_lang %pkg_name --with-man %post /sbin/install-info %{_infodir}/help2man.info %{_infodir}/dir 2>/dev/null || : %preun if [ USD1 -eq 0 ]; then /sbin/install-info --delete %{_infodir}/help2man.info %{_infodir}/dir 2>/dev/null || : fi %files -f %pkg_name.lang %doc README NEWS THANKS COPYING %{_bindir}/help2man %{_infodir}/* %{_mandir}/man1/* %if %{with nls} %{_libdir}/help2man %endif %changelog * Tue Apr 22 2014 John Doe <[email protected]> - 1.44.1-1 - Built for h2m144 SCL.", "scl enable h2m144 'help2man bash'", ".\\\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.44.1. .TH BASH, \"1\" \"April 2014\" \"bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)\" \"User Commands\" .SH NAME bash, \\- manual page for bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu) .SH SYNOPSIS .B bash [\\fIGNU long option\\fR] [\\fIoption\\fR] .SH DESCRIPTION GNU bash, version 4.1.2(1)\\-release\\-(x86_64\\-redhat\\-linux\\-gnu) .IP bash [GNU long option] [option] script\\-file .SS \"GNU long options:\" .HP \\fB\\-\\-debug\\fR" ]
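The build steps in the section above name the packages to build and install, but not the commands themselves. Assuming a standard rpmbuild workflow, the sequence might look like the following sketch; the spec files are the ones shown in this section, the package versions come from those spec files, and the RPM paths are placeholders for wherever your rpmbuild output lands.

# prerequisites from the rh-perl524 Software Collection
yum install rh-perl524-scldevel rh-perl524-perl-macros

# build the h2m144 metapackage, then install the runtime and build subpackages
rpmbuild -ba h2m144.spec
yum install /path/to/h2m144-runtime-1-1.*.rpm /path/to/h2m144-build-1-1.*.rpm

# build requirements for help2man, then build and install it
yum install rh-perl524-perl rh-perl524-perl-Text-ParseWords rh-perl524-perl-Getopt-Long
rpmbuild -ba help2man.spec
yum install /path/to/h2m144-help2man-1.44.1-1.*.rpm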
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-extending_the_perl_software_collection
Chapter 9. Capability trimming
Chapter 9. Capability trimming When building a bootable JAR, you can decide which JBoss EAP features and subsystems to include. Note Capability trimming is supported only on OpenShift or when building a bootable JAR. Additional resources About the bootable JAR 9.1. Available JBoss EAP layers Red Hat makes available a number of layers to customize provisioning of the JBoss EAP server in OpenShift or a bootable JAR. Three layers are base layers that provide core functionality. The other layers are decorator layers that enhance the base layers with additional capabilities. Most decorator layers can be used to build S2I images in JBoss EAP for OpenShift or to build a bootable JAR. A few layers do not support S2I images; the description of the layer notes this limitation. Note Only the listed layers are supported. Layers not listed here are not supported. 9.1.1. Base layers Each base layer includes core functionality for a typical server user case. datasources-web-server This layer includes a servlet container and the ability to configure a datasource. This layer does not include MicroProfile capabilities. The following Jakarta EE specifications are supported in this layer: Jakarta JSON Processing 1.1 Jakarta JSON Binding 1.0 Jakarta Servlet 4.0 Jakarta Expression Language 3.0 Jakarta Server Pages 2.3 Jakarta Standard Tag Library 1.2 Jakarta Concurrency 1.1 Jakarta Annotations 1.3 Jakarta XML Binding 2.3 Jakarta Debugging Support for Other Languages 1.0 Jakarta Transactions 1.3 Jakarta Connectors 1.7 jaxrs-server This layer enhances the datasources-web-server layer with the following JBoss EAP subsystems: jaxrs weld jpa This layer also adds Infinispan-based second-level entity caching locally in the container. The following MicroProfile capability is included in this layer: MicroProfile REST Client The following Jakarta EE specifications are supported in this layer in addition to those supported in the datasources-web-server layer: Jakarta Contexts and Dependency Injection 2.0 Jakarta Bean Validation 2.0 Jakarta Interceptors 1.2 Jakarta RESTful Web Services 2.1 Jakarta Persistence 2.2 cloud-server This layer enhances the jaxrs-server layer with the following JBoss EAP subsystems: resource-adapters messaging-activemq (remote broker messaging, not embedded messaging) This layer also adds the following observability features to the jaxrs-server layer: MicroProfile Health MicroProfile Config The following Jakarta EE specification is supported in this layer in addition to those supported in the jaxrs-server layer: Jakarta Security 1.0 cloud-default-mp-config This layer provisions a server with standalone configuration based on the standalone-microprofile-ha.xml file. The cloud-default-mp-layer is provided by the org.jboss.eap.xp:eap-xp-cloud-galleon-pack and is supported for JBoss EAP XP S2I build, but not for a Bootable JAR. For more information about the server configuration files included in JBoss EAP XP, see the Standalone server configuration files section. This workflow uses the microprofile-config quickstart as an example. The quickstart provides a small, specific working example that can be used as a reference for your own project. See the microprofile-config quickstart that ships with JBoss EAP XP 5.0.0 for more information. ee-core-profile-server The ee-core-profile-server layer provisions a server with the Jakarta EE 10 Core Profile. The Core Profile provides a small, lightweight profile for users that provides both core JBoss EAP server functionality and Jakarta EE APIs. 
The ee-core-profile-server layer is best suited for smaller runtimes such as cloud-native applications and microservices. 9.1.2. Decorator layers Decorator layers are not used alone. You can configure one or more decorator layers with a base layer to deliver additional functionality. ejb-lite This decorator layer adds a minimal Jakarta Enterprise Beans implementation to the provisioned server. The following support is not included in this layer: IIOP integration MDB instance pool Remote connector resource This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. Jakarta Enterprise Beans This decorator layer extends the ejb-lite layer. This layer adds the following support to the provisioned server, in addition to the base functionality included in the ejb-lite layer: MDB instance pool Remote connector resource Use this layer if you want to use message-driven beans (MDBs) or Jakarta Enterprise Beans remoting capabilities, or both. If you do not need these capabilities, use the ejb-lite layer. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. ejb-local-cache This decorator layer adds local caching support for Jakarta Enterprise Beans to the provisioned server. Dependencies : You can only include this layer if you have included the ejb-lite layer or the ejb layer. Note This layer is not compatible with the ejb-dist-cache layer. If you include the ejb-dist-cache layer, you cannot include the ejb-local-cache layer. If you include both layers, the resulting build might include an unexpected Jakarta Enterprise Beans configuration. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. ejb-dist-cache This decorator layer adds distributed caching support for Jakarta Enterprise Beans to the provisioned server. Dependencies : You can only include this layer if you have included the ejb-lite layer or the ejb layer. Note This layer is not compatible with the ejb-local-cache layer. If you include the ejb-dist-cache layer, you cannot include the ejb-local-cache layer. If you include both layers, the resulting build might result in an unexpected configuration. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. jdr This decorator layer adds the JBoss Diagnostic Reporting ( jdr ) subsystem to gather diagnostic data when requesting support from Red Hat. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. Jakarta Persistence This decorator layer adds persistence capabilities for a single-node server. Note that distributed caching only works if the servers are able to form a cluster. The layer adds Hibernate libraries to the provisioned server, with the following support: Configurations of the jpa subsystem Configurations of the infinispan subsystem A local Hibernate cache container Note This layer is not compatible with the jpa-distributed layer. If you include the jpa layer, you cannot include the jpa-distributed layer. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. jpa-distributed This decorator layer adds persistence capabilities for servers operating in a cluster. 
The layer adds Hibernate libraries to the provisioned server, with the following support: Configurations of the jpa subsystem Configurations of the infinispan subsystem A local Hibernate cache container Invalidation and replication Hibernate cache containers Configuration of the jgroups subsystem Note This layer is not compatible with the jpa layer. If you include the jpa layer, you cannot include the jpa-distributed layer. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. Jakarta Server Faces This decorator layer adds the jsf subsystem to the provisioned server. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. microprofile-platform This decorator layer adds the following MicroProfile capabilities to the provisioned server: MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile JWT MicroProfile OpenAPI Note This layer includes MicroProfile capabilities that are also included in the observability layer. If you include this layer, you do not need to include the observability layer. observability This decorator layer adds the following observability features to the provisioned server: MicroProfile Health MicroProfile Config Note This layer is built in to the cloud-server layer. You do not need to add this layer to the cloud-server layer. remote-activemq This decorator layer adds the ability to communicate with a remote ActiveMQ broker to the provisioned server, integrating messaging support. The pooled connection factory configuration specifies guest as the value for the user and password attributes. You can use a CLI script to change these values at runtime. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. sso This decorator layer adds Red Hat Single Sign-On integration to the provisioned server. This layer should only be used when provisioning a server using S2I. web-console This decorator layer adds the management console to the provisioned server. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. web-clustering This decorator layer adds support for distributable web applications by configuring a non-local Infinispan-based container web cache for data session handling suitable to clustering environments. web-passivation This decorator layer adds support for distributable web applications by configuring a local Infinispan-based container web cache for data session handling suitable to single node environments. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. webservices This layer adds web services functionality to the provisioned server, supporting Jakarta web services deployments. This layer is only supported when building a bootable JAR. This layer is not supported when using S2I. Additional resources Pooled Connection Factory Attributes
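As an illustration of how the base and decorator layers above combine when building a bootable JAR, the following is a rough sketch of a Maven plugin configuration that trims the server to the jaxrs-server base layer plus the web-clustering and jdr decorator layers. The layer names come from this chapter; the plugin coordinates, version property, and feature-pack location are assumptions standing in for whatever your JBoss EAP XP project already defines, so adapt them to your build.

<plugin>
  <!-- plugin coordinates and version are placeholders; use the ones defined in your project -->
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-jar-maven-plugin</artifactId>
  <version>${bootable.jar.maven.plugin.version}</version>
  <configuration>
    <feature-packs>
      <feature-pack>
        <!-- placeholder for the JBoss EAP XP Galleon feature pack used by your project -->
        <location>${jboss.xp.galleon.feature.pack.location}</location>
      </feature-pack>
    </feature-packs>
    <layers>
      <layer>jaxrs-server</layer>     <!-- base layer -->
      <layer>web-clustering</layer>   <!-- decorator: distributable web sessions -->
      <layer>jdr</layer>              <!-- decorator: diagnostic reporting -->
    </layers>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Only the subsystems pulled in by the listed layers are provisioned into the resulting bootable JAR, which is the capability trimming this chapter describes.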
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/capability-trimming_default
Chapter 1. Configuring and deploying OpenShift Lightspeed
Chapter 1. Configuring and deploying OpenShift Lightspeed After the OpenShift Lightspeed Operator is installed, configuring and deploying OpenShift Lightspeed consists of three tasks. First, you create a credential secret using the credentials for your Large Language Model (LLM) provider. , you create the OLSConfig custom resource (CR) that the Operator uses to deploy the service. Finally, you verify that the Lightspeed service is operating. Note The instructions assume that you are installing OpenShift Lightspeed using the kubeadmin user account. If you are using a regular user account with cluster-admin privileges, read the section of the documentation that discusses RBAC. 1.1. Creating the credentials secret by using the web console Create a file that is associated with the API token used to access the API of your large language model (LLM) provider. Typically, you use API tokens to authenticate your LLM provider. Alternatively, Microsoft Azure also supports authentication using Microsoft Entra ID. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Alternatively, you are logged in to a user account that has permission to create a secret to store the Provider tokens. You have installed the OpenShift Lightspeed Operator. Procedure Click Add in the upper-right corner of the OpenShift web console. Paste the YAML content for the LLM provider that you are using into the text area of the web console. Note The YAML parameter is always apitoken regardless of what the LLM provider calls the access details. Credential secret for LLM provider apiVersion: v1 kind: Secret metadata: name: credentials namespace: openshift-lightspeed type: Opaque stringData: apitoken: <api_token> 1 1 The apitoken is not base64 encoded. Credential secret for Red Hat Enterprise Linux AI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhelai-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for Red Hat OpenShift AI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhoai-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for IBM watsonx apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: watsonx-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for Microsoft Azure OpenAI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque Alternatively, for Microsoft Azure OpenAI you can use Microsoft Entra ID to authenticate your LLM provider. Microsoft Entra ID users must configure the required roles for their Microsoft Azure OpenAI resource. For more information, see the official Microsoft Cognitive Services OpenAI Contributor (Microsoft Azure OpenAI Service documentation). Credential secret for Microsoft Entra ID apiVersion: v1 data: client_id: <base64_encoded_client_id> client_secret: <base64_encoded_client_secret> tenant_id: <base64_encoded_tenant_id> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque Click Create . 1.2. Creating the Lightspeed custom resource file using the web console The Custom Resource (CR) file contains information that the Operator uses to deploy OpenShift Lightspeed. The specific content of the CR file is unique for each Large Language Model (LLM) provider. Choose the configuration file that matches your LLM provider. 
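If you want to confirm that the credential secret from the previous section was created correctly before applying the custom resource, a quick check with a standard oc command is enough. The secret name credentials matches the generic example above; substitute azure-api-keys, watsonx-api-keys, or whichever provider-specific name you used.

oc get secret credentials -n openshift-lightspeed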
Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Alternatively, you are logged in to a user account that has permission to create a cluster-scoped CR file. You have installed the OpenShift Lightspeed Operator. Procedure Click Add in the upper-right corner of the OpenShift web console. Paste the YAML content for the LLM provider you use into the text area of the web console: OpenAI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myOpenai type: openai credentialsSecretRef: name: credentials url: https://api.openai.com/v1 models: - name: gpt-3.5-turbo ols: defaultModel: gpt-3.5-turbo defaultProvider: myOpenai Red Hat Enterprise Linux AI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: models/granite-7b-redhat-lab name: rhelai type: rhelai_vllm url: <URL> 1 ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab 1 The URL endpoint must end with v1 to be valid. For example, https://http://3.23.103.8:8000/v1 . Red Hat OpenShift AI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: granite-8b-code-instruct-128k name: red_hat_openshift_ai type: rhoai_vllm url: <url> 1 ols: defaultProvider: red_hat_openshift_ai defaultModel: granite-8b-code-instruct-128k 1 The URL endpoint must end with v1 to be valid. For example, https://granite-8b-code-instruct.my-domain.com:443/v1 . Microsoft Azure OpenAI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: credentials deploymentName: <azure_ai_deployment_name> models: - name: gpt-35-turbo-16k name: myAzure type: azure_openai url: <azure_ai_deployment_url> ols: defaultModel: gpt-35-turbo-16k defaultProvider: myAzure IBM watsonx CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myWatsonx type: watsonx credentialsSecretRef: name: credentials url: <ibm_watsonx_deployment_name> projectId: <ibm_watsonx_project_id> models: - name: ibm/granite-13b-chat-v2 ols: defaultModel: ibm/granite-13b-chat-v2 defaultProvider: myWatsonx Click Create . 1.3. Creating the credentials secret by using the CLI Create a file that is associated with the API token used to access the API of your large language model (LLM) provider. Typically, you use API tokens to authenticate your LLM provider. Alternatively, Microsoft Azure also supports authentication using Microsoft Entra ID. Prerequisites You have access to the OpenShift CLI (oc) as a user with the cluster-admin role. Alternatively, you are logged in to a user account that has permission to create a secret to store the Provider tokens. You have installed the OpenShift Lightspeed Operator. Procedure Create a YAML file that contains the content for the LLM provider that you are using. Note The YAML parameter is always apitoken regardless of what the LLM provider calls the access details. Credential secret for LLM provider apiVersion: v1 kind: Secret metadata: name: credentials namespace: openshift-lightspeed type: Opaque stringData: apitoken: <api_token> 1 1 The apitoken is not base64 encoded. 
Credential secret for Red Hat Enterprise Linux AI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhelai-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for Red Hat OpenShift AI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhoai-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for IBM watsonx apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: watsonx-api-keys namespace: openshift-lightspeed type: Opaque Credential secret for Microsoft Azure OpenAI apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque Alternatively, for Microsoft Azure OpenAI you can use Microsoft Entra ID to authenticate your LLM provider. Microsoft Entra ID users must configure the required roles for their Microsoft Azure OpenAI resource. For more information, see the official Microsoft Cognitive Services OpenAI Contributor (Microsoft Azure OpenAI Service documentation). Credential secret for Microsoft Entra ID apiVersion: v1 data: client_id: <base64_encoded_client_id> client_secret: <base64_encoded_client_secret> tenant_id: <base64_encoded_tenant_id> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque Run the following command to create the secret: USD oc create -f /path/to/secret.yaml 1.4. Creating the Lightspeed custom resource file using the CLI The Custom Resource (CR) file contains information that the Operator uses to deploy OpenShift Lightspeed. The specific content of the CR file is unique for each Large Language Model (LLM) provider. Choose the configuration file that matches your LLM provider. Prerequisites You have access to the OpenShift CLI (oc) and are logged in as a user with the cluster-admin role. Alternatively, you are logged in to a user account that has permission to create a cluster-scoped CR file. You have installed the OpenShift Lightspeed Operator. Procedure Create an OLSConfig file that contains the YAML content for the LLM provider you use: OpenAI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myOpenai type: openai credentialsSecretRef: name: credentials url: https://api.openai.com/v1 models: - name: gpt-3.5-turbo ols: defaultModel: gpt-3.5-turbo defaultProvider: myOpenai Red Hat Enterprise Linux AI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: models/granite-7b-redhat-lab name: rhelai type: rhelai_vllm url: <URL> 1 ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab additionalCAConfigMapRef: name: openshift-service-ca.crt 1 The URL endpoint must end with v1 to be valid. For example, https://http://3.23.103.8:8000/v1 . Red Hat OpenShift AI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: granite-8b-code-instruct-128k name: red_hat_openshift_ai type: rhoai_vllm url: <url> 1 ols: defaultProvider: red_hat_openshift_ai defaultModel: granite-8b-code-instruct-128k 1 The URL endpoint must end with v1 to be valid. For example, https://granite-8b-code-instruct.my-domain.com:443/v1 . 
Microsoft Azure OpenAI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: credentials deploymentName: <azure_ai_deployment_name> models: - name: gpt-35-turbo-16k name: myAzure type: azure_openai url: <azure_ai_deployment_url> ols: defaultModel: gpt-35-turbo-16k defaultProvider: myAzure IBM watsonx CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myWatsonx type: watsonx credentialsSecretRef: name: credentials url: <ibm_watsonx_deployment_name> projectId: <ibm_watsonx_project_id> models: - name: ibm/granite-13b-chat-v2 ols: defaultModel: ibm/granite-13b-chat-v2 defaultProvider: myWatsonx Run the following command: USD oc create -f /path/to/config-cr.yaml The Operator deploys OpenShift Lightspeed using the information in the YAML configuration file. 1.4.1. Configuring OpenShift Lightspeed with a trust provider certificate for the LLM This procedure explains how to configure OpenShift Lightspeed with a trust provider certificate for the Large Language Model (LLM) provider. Note If the LLM provider you are using requires a trust certificate to authenticate the OpenShift Lightspeed service, you must perform this procedure. If the LLM provider does not require a trust certificate to authenticate the service, you should skip this procedure. Procedure Copy the contents of the certificate file and paste them into a file called caCertFileName . Create a ConfigMap object called trusted-certs by running the following command: USD oc create configmap trusted-certs --from-file=caCertFileName Example output kind: ConfigMap apiVersion: v1 metadata: name: trusted-certs namespace: openshift-lightspeed data: caCertFileName: | 1 -----BEGIN CERTIFICATE----- . . . -----END CERTIFICATE----- 1 Specify the CA certificates required to connect to your LLM provider. You can include one or more certificates. Update the OLSConfig custom resource file to include the name of the ConfigMap object you just created. Example Red Hat Enterprise Linux AI CR file apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab additionalCAConfigMapRef: name: trusted-certs 1 1 Specifies the name of the ConfigMap object. Create the custom CR by running the following command: USD oc apply -f <olfconfig_cr_filename> 1.5. Verifying the OpenShift Lightspeed deployment After the OpenShift Lightspeed service is deployed, verify that it is operating. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. You have access to the OpenShift CLI (oc). You have installed the OpenShift Lightspeed Operator. You have created the credentials secret and the OLSConfig Custom Resource configuration file. Procedure In the OpenShift Container Platform web console, select the Developer perspective from the drop-down list at the top of the pane. Click the Project drop-down list. Enable the toggle switch to show default projects. Select openshift-lightspeed from the list. Click Topology . When the circle around the Lightspeed icon turns dark blue, the service is ready. Verify that the OpenShift Lightspeed service is ready by running the following command: USD oc logs deployment/lightspeed-app-server -c lightspeed-service-api -n openshift-lightspeed | grep Uvicorn Example output INFO: Uvicorn running on https://0.0.0.0:8443 (Press CTRL+C to quit) 1.6.
About Lightspeed and Role Based Access Control (RBAC) Role-Based Access Control (RBAC) is a system security approach to restricting system access to authorized users who have defined roles and permissions. OpenShift Lightspeed RBAC is binary. By default, not all cluster users have access to the OpenShift Lightspeed interface. Access must be granted by a user who can grant permissions. All users of an OpenShift cluster with OpenShift Lightspeed installed can see the Lightspeed button; however, only users with permissions can submit questions to Lightspeed. If you want to evaluate the RBAC features of OpenShift Lightspeed, your cluster will need users other than the kubeadmin account. The kubeadmin account always has access to OpenShift Lightspeed. 1.6.1. Granting access to an individual user This procedure explains how to grant access to an individual user. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Alternatively, you are logged in as a user with the ability to grant permissions. You have deployed the OpenShift Lightspeed service. You have access to the OpenShift CLI (oc). Procedure Run the following command at the command line: USD oc adm policy add-cluster-role-to-user \ lightspeed-operator-query-access <user_name> Alternatively, you can use a YAML file when granting access to an individual user by using the following command: USD oc adm policy add-cluster-role-to-user lightspeed-operator-query-access <user_name> -o yaml --dry-run The terminal returns the following output: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: lightspeed-operator-query-access roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: lightspeed-operator-query-access subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <user_name> 1 1 Enter the actual user name in place of <user_name> before creating the object. Save the output as a YAML file, and run the following command to grant user access: USD oc create -f <yaml_filename> 1.6.2. Granting access to a user group This procedure explains how to grant access to a user group. If your cluster has more advanced identity management configured, including user groups, you can grant all users of a specific group access to the OpenShift Lightspeed service. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Alternatively, you are logged in as a user with the ability to grant permissions. You have deployed the OpenShift Lightspeed service. You have access to the OpenShift CLI (oc). Procedure Run the following command at the command line: USD oc adm policy add-cluster-role-to-group \ lightspeed-operator-query-access <group_name> Alternatively, you can use a YAML file when granting access to a user group by using the following command: USD oc adm policy add-cluster-role-to-group lightspeed-operator-query-access <group_name> -o yaml --dry-run The terminal returns the following output: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: lightspeed-operator-query-access roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: lightspeed-operator-query-access subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: <user_group> 1 1 Enter the actual user group in place of <user_group> before creating the object. 
Save the output as a YAML file, and run the following command to grant access to the user group: USD oc create -f <yaml_filename> 1.7. Filtering and redacting information You can configure OpenShift Lightspeed to filter or redact information before it is sent to the LLM provider. The following example shows how to modify the OLSConfig file to redact IP addresses. Note You should test your regular expressions against sample data to confirm that they are catching the information you want to filter or redact, and that they are not accidentally catching information you do not want to filter or redact. There are several third-party websites that you can use to test your regular expressions. When using third-party sites, you should practice caution with regard to sharing your private data. Alternatively, you can test the regular expressions locally by using Python. Be aware that it is possible to design regular expressions that are very computationally expensive, and using several complex expressions as query filters can adversely impact the performance of OpenShift Lightspeed. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. You have access to the OpenShift CLI (oc). You have installed the OpenShift Lightspeed Operator and deployed the OpenShift Lightspeed service. Procedure Modify the OLSConfig file and create an entry for each regular expression to filter. The following example redacts IP addresses: Example custom resource file spec: ols: queryFilters: - name: ip-address pattern: '((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}' replaceWith: <IP_ADDRESS> Run the following command to apply the modified OpenShift Lightspeed custom configuration: USD oc apply -f OLSConfig.yaml
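As mentioned in the note above, you can test a query filter locally with Python before applying it. The following is a minimal sketch that uses the ip-address pattern and replacement value from the example custom resource file; the sample log line is invented for illustration:
import re
# Pattern copied from the ip-address queryFilters entry above
pattern = r'((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}'
sample = "Node 10.0.12.7 failed to reach 192.168.1.1 on port 6443."
print(re.sub(pattern, "<IP_ADDRESS>", sample))
# Prints: Node <IP_ADDRESS> failed to reach <IP_ADDRESS> on port 6443.
If the printed line still contains the addresses, or if unrelated text is replaced, adjust the pattern before adding it to the OLSConfig file.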
[ "apiVersion: v1 kind: Secret metadata: name: credentials namespace: openshift-lightspeed type: Opaque stringData: apitoken: <api_token> 1", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhelai-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhoai-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: watsonx-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: client_id: <base64_encoded_client_id> client_secret: <base64_encoded_client_secret> tenant_id: <base64_encoded_tenant_id> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myOpenai type: openai credentialsSecretRef: name: credentials url: https://api.openai.com/v1 models: - name: gpt-3.5-turbo ols: defaultModel: gpt-3.5-turbo defaultProvider: myOpenai", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: models/granite-7b-redhat-lab name: rhelai type: rhelai_vllm url: <URL> 1 ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: granite-8b-code-instruct-128k name: red_hat_openshift_ai type: rhoai_vllm url: <url> 1 ols: defaultProvider: red_hat_openshift_ai defaultModel: granite-8b-code-instruct-128k", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: credentials deploymentName: <azure_ai_deployment_name> models: - name: gpt-35-turbo-16k name: myAzure type: azure_openai url: <azure_ai_deployment_url> ols: defaultModel: gpt-35-turbo-16k defaultProvider: myAzure", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myWatsonx type: watsonx credentialsSecretRef: name: credentials url: <ibm_watsonx_deployment_name> projectId: <ibm_watsonx_project_id> models: - name: ibm/granite-13b-chat-v2 ols: defaultModel: ibm/granite-13b-chat-v2 defaultProvider: myWatsonx", "apiVersion: v1 kind: Secret metadata: name: credentials namespace: openshift-lightspeed type: Opaque stringData: apitoken: <api_token> 1", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhelai-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: rhoai-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: watsonx-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: apitoken: <api_token> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque", "apiVersion: v1 data: client_id: <base64_encoded_client_id> client_secret: <base64_encoded_client_secret> tenant_id: <base64_encoded_tenant_id> kind: Secret metadata: name: azure-api-keys namespace: openshift-lightspeed type: Opaque", "oc create -f /path/to/secret.yaml", "apiVersion: 
ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myOpenai type: openai credentialsSecretRef: name: credentials url: https://api.openai.com/v1 models: - name: gpt-3.5-turbo ols: defaultModel: gpt-3.5-turbo defaultProvider: myOpenai", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: models/granite-7b-redhat-lab name: rhelai type: rhelai_vllm url: <URL> 1 ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab additionalCAConfigMapRef: name: openshift-service-ca.crt", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: openai-api-keys models: - name: granite-8b-code-instruct-128k name: red_hat_openshift_ai type: rhoai_vllm url: <url> 1 ols: defaultProvider: red_hat_openshift_ai defaultModel: granite-8b-code-instruct-128k", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - credentialsSecretRef: name: credentials deploymentName: <azure_ai_deployment_name> models: - name: gpt-35-turbo-16k name: myAzure type: azure_openai url: <azure_ai_deployment_url> ols: defaultModel: gpt-35-turbo-16k defaultProvider: myAzure", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: llm: providers: - name: myWatsonx type: watsonx credentialsSecretRef: name: credentials url: <ibm_watsonx_deployment_name> projectId: <ibm_watsonx_project_id> models: - name: ibm/granite-13b-chat-v2 ols: defaultModel: ibm/granite-13b-chat-v2 defaultProvider: myWatsonx", "oc create -f /path/to/config-cr.yaml", "oc create configmap trusted-certs --from-file=caCertFileName", "kind: ConfigMap apiVersion: v1 metadata: name: trusted-certs namespace: openshift-lightspeed data: caCertFileName: | 1 -----BEGIN CERTIFICATE----- . . . 
-----END CERTIFICATE-----", "apiVersion: ols.openshift.io/v1alpha1 kind: OLSConfig metadata: name: cluster spec: ols: defaultProvider: rhelai defaultModel: models/granite-7b-redhat-lab additionalCAConfigMapRef: name: trusted-certs 1", "oc apply -f <olfconfig_cr_filename>", "oc logs deployment/lightspeed-app-server -c lightspeed-service-api -n openshift-lightspeed | grep Uvicorn", "INFO: Uvicorn running on https://0.0.0.0:8443 (Press CTRL+C to quit)", "oc adm policy add-cluster-role-to-user lightspeed-operator-query-access <user_name>", "oc adm policy add-cluster-role-to-user lightspeed-operator-query-access <user_name> -o yaml --dry-run", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: lightspeed-operator-query-access roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: lightspeed-operator-query-access subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <user_name> 1", "oc create -f <yaml_filename>", "oc adm policy add-cluster-role-to-group lightspeed-operator-query-access <group_name>", "oc adm policy add-cluster-role-to-group lightspeed-operator-query-access <group_name> -o yaml --dry-run", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: lightspeed-operator-query-access roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: lightspeed-operator-query-access subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: <user_group> 1", "oc create -f <yaml_filename>", "spec: ols: queryFilters: - name: ip-address pattern: '((25[0-5]|(2[0-4]|1\\d|[1-9]|)\\d)\\.?\\b){4}' replaceWith: <IP_ADDRESS>", "oc apply -f OLSConfig.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_lightspeed/1.0tp1/html/configure/ols-configuring-openshift-lightspeed
8.1 Release Notes
8.1 Release Notes Red Hat Enterprise Linux 8.1 Release Notes for Red Hat Enterprise Linux 8.1 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/index
Chapter 7. RangeAllocation [security.openshift.io/v1]
Chapter 7. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required range data 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data string data is a byte array representing the serialized state of a range allocation. It is a bitmap with each bit set to one to represent a range is taken. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata range string range is a string representing a unique label for a range of uids, "1000000000-2000000000/10000". 7.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/rangeallocations DELETE : delete collection of RangeAllocation GET : list or watch objects of kind RangeAllocation POST : create a RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations GET : watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. /apis/security.openshift.io/v1/rangeallocations/{name} DELETE : delete a RangeAllocation GET : read the specified RangeAllocation PATCH : partially update the specified RangeAllocation PUT : replace the specified RangeAllocation /apis/security.openshift.io/v1/watch/rangeallocations/{name} GET : watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.1. /apis/security.openshift.io/v1/rangeallocations HTTP method DELETE Description delete collection of RangeAllocation Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind RangeAllocation Table 7.3. HTTP responses HTTP code Reponse body 200 - OK RangeAllocationList schema 401 - Unauthorized Empty HTTP method POST Description create a RangeAllocation Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body RangeAllocation schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 202 - Accepted RangeAllocation schema 401 - Unauthorized Empty 7.2.2. /apis/security.openshift.io/v1/watch/rangeallocations HTTP method GET Description watch individual changes to a list of RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/security.openshift.io/v1/rangeallocations/{name} Table 7.8. Global path parameters Parameter Type Description name string name of the RangeAllocation HTTP method DELETE Description delete a RangeAllocation Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RangeAllocation Table 7.11. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RangeAllocation Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RangeAllocation Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body RangeAllocation schema Table 7.16. HTTP responses HTTP code Reponse body 200 - OK RangeAllocation schema 201 - Created RangeAllocation schema 401 - Unauthorized Empty 7.2.4. /apis/security.openshift.io/v1/watch/rangeallocations/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the RangeAllocation HTTP method GET Description watch changes to an object of kind RangeAllocation. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
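For illustration, the list and read endpoints described above can be exercised with the OpenShift CLI or directly against the API server. The following is a minimal sketch, assuming you are logged in with sufficient permissions; <name>, <token>, and <api_server> are placeholders:
oc get rangeallocations
oc get rangeallocations <name> -o yaml
curl -k -H "Authorization: Bearer <token>" https://<api_server>:6443/apis/security.openshift.io/v1/rangeallocations
The response to the list request is a RangeAllocationList object, as described in the HTTP responses tables above.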
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_apis/rangeallocation-security-openshift-io-v1